Venue: NIPS
Title: Variational Amodal Object Completion

Abstract

In images of complex scenes, objects often occlude each other, which makes perception tasks such as object detection and tracking, or robotic control tasks such as planning, challenging. To facilitate downstream tasks, it is thus important to reason about the full extent of objects, i.e., to see behind occlusions, a task typically referred to as amodal instance completion. In this paper, we propose a variational generative framework for amodal completion, referred to as Amodal-VAE, which does not require any amodal labels at training time, as it is able to utilize widely available object instance masks. We showcase our approach on the downstream task of scene editing, where the user is presented with interactive tools to complete and erase objects in photographs. Experiments on complex street scenes demonstrate state-of-the-art performance in amodal mask completion and showcase high-quality scene editing results. Interestingly, a user study shows that humans prefer object completions inferred by our model to the human-labeled ones.

1 Introduction

One of the most remarkable properties of the human visual system is the ability to rapidly recognize objects and understand their spatial extent in complex visual scenes, even when objects are barely visible due to occlusion [9, 42]. This is important, as it allows humans to more accurately anticipate what can happen a few moments into the future, and plan accordingly. We expect such a capability to also benefit robotic systems. Reasoning about objects and their extent is also key in other contexts, for example, in semantic image editing tasks. Imagine a user that wants to erase an object from a photograph, and possibly even manipulate objects that are partially hidden behind it. To do this, an A.I. system needs to be able to “complete” the occluded objects in the scene, both in their spatial extent, i.e., their masks, as well as in appearance. This problem is typically referred to as amodal instance completion and is an important component of many applications. However, most research in the domain of semantic segmentation has focused on the “modal” perception of the scene [6, 11, 34], i.e., segmenting the visible pixels of objects, for which large-scale annotated datasets are available [7, 23, 41]. The lack of labeled data for amodal segmentation is likely due to the difficulty and ambiguity of the annotation task. Amodal annotation of occluded objects requires a human labeler to draw an imagined contour rather than tracing a visible contour in an image, which requires drawing skills that not all annotators possess. In cases where objects are highly occluded, there may also be multiple valid hypotheses for a plausible completion.

In this work, we propose a variational generative framework for amodal instance completion, called Amodal-VAE. It does not require amodal labels at training time, and exploits instance masks of visible parts of objects that are widely available in current datasets. Our approach learns to reconstruct full objects from partial masks by training a variational autoencoder in carefully designed stages that allow us to model the complete mask with a low-dimensional latent representation. The probabilistic framework naturally incorporates the ambiguity in the mask completion task and is able to produce multiple plausible completions, which existing work cannot.
We showcase our approach on the downstream task of scene editing, where the user is presented with interactive tools to complete and erase objects in an image. Experiments demonstrate significant improvements over the recently released state-of-the-art approach [39]. A user study further reveals that participants strongly prefer amodal masks produced by our model over the human-annotated amodal masks.

2 Related Work

We focus our review on amodal mask completion, which is the primary contribution of our work. The task of amodal instance segmentation aims at segmenting both visible and occluded parts of an object instance. This is in contrast to traditional semantic segmentation [34, 6] or instance segmentation tasks [11, 28, 1, 24], which aim to segment only the visible pixels of an object. Prior work usually decomposes amodal instance segmentation into instance segmentation and amodal mask completion. Supervision is needed for both stages, typically resorting to either synthetic datasets or human-provided labels, which we discuss below.

Real Datasets: Recently, human-labeled real datasets have been collected for amodal instance segmentation. Authors extended KITTI [10] to create KINS [27], and COCO [23] to create COCOA [42]. However, there is little available labeled data, in part due to the ambiguity of the labeling task.

Synthetic Datasets: One plausible way to get amodal labels is to exploit graphics renderers [14, 43, 17]. In [14], a photo-realistic video dataset is extracted from the GTA-V game along with pixel-accurate masks. In [16], 3D models are aligned with images from PASCAL 3D+ [36] and rendered along with their annotated 3D pose to obtain masks and amodal bounding boxes. In [8], the authors created DYCE by taking snapshots from 3D synthetic scenes [43]. While 3D content provides labels for “free” via rendering, it is not widely available and typically lacks diversity and realism.

Simulated Data: A simple way to utilize real data annotated with instance (but not amodal) masks is to simulate occlusion, i.e., to overlay objects on top of other objects [21, 37, 39]. One problem with this type of approach is that the composited images do not look natural, and thus appearance-based models may not generalize well to real images. [37] created OVD (Occluded Vehicle Dataset) by randomly placing pedestrians and vehicles on base images and exploited the Deep Harmonization [35] technique to make synthetic images look natural. Our work, while also relying on occlusion simulation, does so only for object masks, ignoring appearance altogether. Our method can thus exploit either rendered masks, or masks from one dataset for use on another dataset.

Methodology: In most prior work, labels, either real or synthetic, are used in a standard supervised framework. In [21], the authors perform amodal segmentation by iteratively expanding the bounding box around an instance mask based on heat intensity. [27] proposes an occlusion classification branch on top of RPN [28]. In addition to the standard mask prediction loss, [37] utilizes a discriminator loss to encourage amodal predictions to look more similar to amodal masks rendered via the ShapeNet dataset [5]. Our work also bears similarity to the recent De-occlusion paper [39] due to the application to scene editing. However, the approaches for mask completion differ in methodology: ours frames the problem probabilistically, while [39] is a deterministic method.
Related to our work are [32, 38], where the authors train a VAE [20] to learn a 3D shape prior. This prior is then used to generate closed 3D meshes [32] from partial point cloud observations. Several unsupervised methods have also been proposed for amodal mask completion. Prior works treat amodal completion as a contour completion problem, usually recovered by minimizing a shape energy: [18] uses Euler spirals, [30] exploits Contour-Completion Random Fields, and [22] utilizes minimum Hamiltonian cycles and Bézier curves. However, most of these unsupervised methods focus on simple shapes and cannot easily be scaled to real-world datasets.

3 Background and Problem Formulation

In this section, we review Variational Autoencoders (VAEs) and formally define the problem of amodal instance completion that we are addressing.

3.1 Variational Autoencoders

Given a dataset $\mathcal{D} = \{y_i\}_{i=1}^N$, the VAE framework enables us to learn a latent variable generative model $p(y, z) = p_{w_1}(y|z)\,p(z)$, where $p(z)$ is a prior distribution over latent variables and $p_{w_1}(y|z)$ is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters $w_1$ [20, 29]. Since the true posterior distribution $p(z|y)$ is intractable, VAEs employ an auxiliary approximate posterior distribution or encoder $q_{w_2}(z|y)$, parametrized by another neural network with parameters $w_2$. When additional information about the data is available, such as the samples' classes or categories $c$, the framework can be extended to conditional VAEs, in which the encoder, prior, and decoder can be conditioned on this class information [19, 31]. VAEs are trained via variational inference, maximizing the Evidence Lower BOund (ELBO). Here, we consider the case in which only the encoder is conditioned on additional class information $c$ that is available for all samples in the dataset $\mathcal{D}$. The ELBO then is

$$\mathcal{L}_{\text{VAE}}(w_1, w_2) = \mathbb{E}_{y,c \sim \mathcal{D}} \left[ \mathbb{E}_{z \sim q_{w_2}(z|y,c)} \left[ \log p_{w_1}(y|z) \right] - \lambda\, D_{\mathrm{KL}}\!\left(q_{w_2}(z|y,c) \,\|\, p(z)\right) \right] \quad (1)$$

When calculating gradients during training, the expectation over the data is estimated using minibatches, and the expectation over the latent variables $z$ is usually calculated using a single sample from the approximate posterior. Parameter updates are done with stochastic gradient descent, employing the re-parameterization trick [20, 29]. Due to the KL-regularization, the model learns to encode data $y$ in an efficient low-dimensional latent representation $z$. Although strict variational inference corresponds to $\lambda = 1$, it has been shown that different values of $\lambda$ allow us to carefully control the balance between the KL and the reconstruction terms [3, 40, 2, 13, 4], which can be beneficial.

3.2 Amodal Instance Completion

Let $\mathcal{D} = \{\hat{y}_i\}_{i=1}^N$ be a dataset of “partial” instance object masks $\hat{y}_i \in \hat{\mathcal{Y}}$ in images. We can define an Amodal Mask Completion method as a mapping $f: \hat{\mathcal{Y}} \rightarrow \mathcal{Y}$ with completed masks $y_i \in \mathcal{Y}$. In words, the amodal instance completion task recovers the occluded part of a particular object from the partially occluded instance mask. If available, we can use additional information in the function $f$, such as the images' RGB pixel values or the instances' classes $c_i$, as in the VAE framework. Note that, formally, the set of realistic complete masks $\mathcal{Y}$ is a subset of all possible partial masks $\hat{\mathcal{Y}}$.
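For concreteness, the conditional ELBO of Eq. (1) and the re-parameterization trick can be sketched as below. This is a minimal PyTorch illustration on our part, not the paper's implementation; the function names are ours, and the actual architecture and hyperparameters are described in the paper's supplementary material.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I): the re-parameterization trick.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def negative_elbo(decoder_logits, y, mu, logvar, lam=1.0):
    """Negative ELBO of Eq. (1) for a factorial Bernoulli decoder and a
    factorial Normal posterior q(z|y, c) = N(mu, diag(exp(logvar)))."""
    # Reconstruction term: log p_{w1}(y|z), summed over the mask pixels.
    log_py = -F.binary_cross_entropy_with_logits(
        decoder_logits, y, reduction="none").flatten(1).sum(dim=1)
    # KL(q || N(0, I)) in closed form for diagonal Gaussians.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)
    # Maximizing the ELBO is minimizing -(log_py - lam * kl), batch-averaged.
    return (lam * kl - log_py).mean()
```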
4 Variational Object Completion

A trivial solution to the task of Amodal Mask Completion would be to collect a training dataset $\mathcal{D}_{\text{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$ consisting of paired partial masks $\hat{y}_i$ and corresponding complete masks $y_i$ (and potentially additional information, such as instance classes $c_i$). Then, we could fit a parametric model, i.e., a neural network, to it by treating it as an image segmentation problem. However, annotating an amodal dataset is challenging, time-consuming, expensive, and sometimes ambiguous, as the shape of an occluded object may not even be well-defined. The resulting annotations may vary from individual to individual, which could also make learning more difficult.

Instead, we exploit a weakly-supervised approach, where we have access to data with only partially visible masks ($\hat{\mathcal{Y}}$) and separate data with only full masks ($\mathcal{Y}$). As shown in Figure 2, we use a VAE framework in which we first encode partially visible masks $\hat{y}$ into a smooth latent space and then decode the resulting latent codes $z$ into full masks $y$. A crucial advantage of the probabilistic VAE-based framework is that it naturally captures the ambiguity of completing partial masks in its posterior distribution (see Fig. 6). Furthermore, it also deals gracefully with inputs that it is uncertain about: since the model is trained such that all points under the prior distribution map to realistic completed masks, slightly erroneous latent code predictions still decode into well-defined outputs. We denote our model as Amodal-VAE. Next, we present our Amodal-VAE and how we train it to overcome the previously discussed challenges in more detail.

4.1 Learning to Reconstruct Full Objects

We start by presenting the high-level architecture of Amodal-VAE. For simplicity, we assume a factorial Normal prior distribution $p(z) = \mathcal{N}(0, I)$ and factorial Normal approximate posteriors $q_{w_2}(z|y, c)$ and $\hat{q}_{w_3}(z|y, c)$, with means and standard deviations parametrized via convolutional neural networks that also see the objects' categories $c$, which are available in all datasets we work with or can be predicted if necessary. The decoder $p_{w_1}(y|z)$ is a factorial Bernoulli distribution, predicts binary masks, and is parametrized using a deconvolutional neural network (see supplementary material for details). To best leverage the two separate datasets, $\mathcal{Y}$ with fully visible masks and $\hat{\mathcal{Y}}$ with partially visible masks, we train Amodal-VAE in three stages.

(1) Full-Mask-only Training: We want Amodal-VAE to generate only realistic full masks, even when provided with significantly occluded partial masks as input. Hence, during the first stage we focus on learning the generative component $p_{w_1}(y|z)\,p(z)$ of the model and train Amodal-VAE on full masks only, using the ELBO defined in Eq. 1 on $\mathcal{Y}$. It learns low-dimensional representations of complete masks of real objects in its continuous latent space.
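The following is a minimal sketch of such a class-conditioned mask VAE. All layer sizes, the class-embedding mechanism, and the 64x64 working resolution are illustrative assumptions on our part; the paper's exact architecture is described in its supplementary material.

```python
import torch
import torch.nn as nn

class AmodalVAESketch(nn.Module):
    """Conditional mask VAE: conv encoder -> (mu, logvar), deconv decoder ->
    pixelwise Bernoulli logits. Hypothetical sizes; 64x64 masks assumed."""

    def __init__(self, num_classes, z_dim=128):
        super().__init__()
        # The class label is broadcast as one extra input channel (our assumption).
        self.class_emb = nn.Embedding(num_classes, 64 * 64)
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(128 * 8 * 8, z_dim)
        self.to_logvar = nn.Linear(128 * 8 * 8, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # raw logits
        )

    def encode(self, mask, c):
        cmap = self.class_emb(c).view(-1, 1, 64, 64)
        h = self.encoder(torch.cat([mask, cmap], dim=1))
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        return self.decoder(z)  # pixelwise Bernoulli logits
```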
(2) Simulated Partial-to-Full-Mask Training: After (1), any point in latent space under the prior maps to a realistic completed mask. Now, based on the full mask data, we simulate various occlusions, hence generating a synthetic dataset of paired partial and complete masks of the form $\mathcal{D}_{\text{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$. Freezing the previously learnt decoder $p_{w_1}(y|z)$, we then learn a new encoder $\hat{q}_{w_3}(z|\hat{y}, c)$ with parameters $w_3$ that maps partial masks $\hat{y}$ to points in latent space $z$ that decode into the correct completed masks $y$ (a code sketch of this training step is given below). For constructing the synthetic dataset, we sample random instances $y_{\text{foreground}}$ and $y_{\text{instance}}$ from $\mathcal{Y}$ and mask out $y_{\text{instance}}$ by randomly positioning $y_{\text{foreground}}$ in front of it, similar to [39]. We can now maximize the following adapted ELBO objective

$$\mathcal{L}_{\text{Amodal-VAE}}(w_3) = \mathbb{E}_{\hat{y},y,c \sim \mathcal{D}_{\text{train}}} \left[ \mathbb{E}_{z \sim \hat{q}_{w_3}(z|\hat{y},c)} \left[ \log p_{w_1}(y|z) \right] - \lambda\, D_{\mathrm{KL}}\!\left(\hat{q}_{w_3}(z|\hat{y},c) \,\|\, p(z)\right) \right] \quad (2)$$

where $\hat{y}$ are the simulated partial masks, $y$ are the full masks, and $c$ is additional object class information. Notice that only the new encoder parameters $w_3$ are optimized and that we do not use the RGB image information. The composition of the new encoder with the frozen decoder forms the amodal instance completion mapping, which we can formally express as $f(\hat{y}, c) = p^t_{w_1}(\hat{q}^{\mu}_{w_3}(\hat{y}, c))$, where we define the deterministic function $\hat{q}^{\mu}_{w_3}(\hat{y}, c)$ as the mean of $\hat{q}_{w_3}(z|\hat{y}, c)$, and $p^t_{w_1}(z)$ as the binary output mask calculated from the pixelwise Bernoulli probabilities $p_{w_1}(y|z)$ with threshold $t$.

Intuitively, the first term in Equation 2 is the reconstruction loss that guides the encoder to find an appropriate position in the low-dimensional Gaussian manifold which decodes to $\mathcal{Y}$. The second term, the KL loss, regularizes the new approximate posterior $\hat{q}_{w_3}(z|\hat{y}, c)$ to generate only encodings that fall under the prior distribution $p(z)$. Because of the first training stage, and since we keep the decoder frozen, all such encodings $z$ map to complete masks. To aid the new encoder in more easily searching the latent space, we exploit an additional latent code distance loss: we pull encodings from complete and corresponding partial masks close to each other, since both need to decode into the same full masks. For paired $\hat{y}$ and $y$, we minimize the following loss:

$$\mathcal{L}_{\text{LatentCode}}(w_3) = \mathbb{E}_{\hat{y},y,c \sim \mathcal{D}_{\text{train}}} \left[ \mathbb{E}_{\hat{z} \sim \hat{q}_{w_3}(z|\hat{y},c),\; z \sim \hat{q}_{w_3}(z|y,c)}\; \tfrac{1}{2} \left[ \hat{z} - z \right]^2 \right] \quad (3)$$

We approximate the inner expectation using single samples from the approximate posteriors. We found that adding this loss to the ELBO objective slightly increases performance; however, it cannot replace the reconstruction loss. The final loss becomes

$$\mathcal{L}(w_3) = \mathcal{L}_{\text{LatentCode}}(w_3) + \mathcal{L}_{\text{Amodal-VAE}}(w_3) \quad (4)$$

(3) Partial-Mask-only Finetuning: In the third training stage, we “finetune” the Amodal-VAE by training its encoder in standard VAE-fashion using only partial masks from $\hat{\mathcal{Y}}$, masking out all non-visible pixels. Finetuning the Amodal-VAE in this way helps the model deal with complex realistic occlusions, which may not occur during the occlusion simulation in (2), for example because we only use single foreground instances to create simulated occlusions. The decoder remains frozen. For a partially visible mask $\hat{y}$, we define its visible pixels as $\hat{y}^{\text{vis}}$. We can define an ELBO as

$$\mathcal{L}_{\text{Finetuning}}(w_3) = \mathbb{E}_{\hat{y},c \sim \hat{\mathcal{Y}}} \left[ \mathbb{E}_{z \sim \hat{q}_{w_3}(z|\hat{y},c)} \left[ \log p_{w_1}(\hat{y}^{\text{vis}}|z) \right] - \lambda\, D_{\mathrm{KL}}\!\left(\hat{q}_{w_3}(z|\hat{y},c) \,\|\, p(z)\right) \right] \quad (5)$$

where we consider the reconstruction loss only on the visible pixels. In training stages (2) and (3), we additionally apply a spatial transformer network on the output that learns to resize the completed masks such that they can be pasted back into the scene (see Sec. 4.2).
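As a rough illustration, the occlusion simulation and one stage-(2) optimization step might look as follows. This is a sketch under assumed interfaces (an `encode(mask, c)` method returning `(mu, logvar)` and a frozen decoder returning logits, as in the earlier sketch); the shift range in the simulated occlusion is an arbitrary choice of ours.

```python
import torch
import torch.nn.functional as F

def simulate_occlusion(full_mask, foreground_mask, max_shift=16):
    """Build a simulated partial mask by pasting a randomly shifted
    foreground instance over a full mask (both binary 0/1, same size)."""
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    fg = torch.roll(foreground_mask, shifts=(dy, dx), dims=(-2, -1))
    return full_mask * (1.0 - fg)  # keep only the visible pixels

def stage2_loss(model, y, y_hat, c, lam=1.0):
    """Eqs. (2)-(4): only encoder params w3 are optimized; the decoder from
    stage (1) is frozen (requires_grad=False) but remains differentiable,
    so gradients still flow back to the encoder."""
    mu_hat, logvar_hat = model.encode(y_hat, c)
    z_hat = mu_hat + torch.randn_like(mu_hat) * (0.5 * logvar_hat).exp()
    logits = model.decode(z_hat)
    # Eq. (2): reconstruct the FULL mask y from the PARTIAL mask's encoding.
    recon = -F.binary_cross_entropy_with_logits(
        logits, y, reduction="none").flatten(1).sum(1)
    kl = 0.5 * (mu_hat.pow(2) + logvar_hat.exp() - 1.0 - logvar_hat).sum(1)
    # Eq. (3): pull partial- and full-mask encodings together (single samples).
    mu, logvar = model.encode(y, c)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    latent_l2 = 0.5 * (z_hat - z).pow(2).sum(1)
    # Eq. (4): negated adapted ELBO plus the latent-code distance term.
    return (lam * kl - recon + latent_l2).mean()
```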
Motivation: One may ask: why separate training stages (1) and (2)? When learning the actual amodal completion model in stage (2), the approximate posterior sees different partial masks, which can look entirely different due to different simulated occlusions but nevertheless map to similar completed masks. Alternatively, similar partial masks may correspond to very different completed masks. Training on such data constitutes a very difficult and ambiguous learning problem, unlike regular VAE training. If the generative component, i.e., the decoder, were also trained like this, it would result in a weaker model encoding less information in latent space. Therefore, we found it beneficial to first train the generative component separately, in robust standard-VAE fashion with full masks only, and then freeze it. After all, we know that we only ever want to generate full masks. In other words, we separate the difficulty of learning a high-quality generative component from the difficulty of learning to map many different partial masks to similar completed masks and vice versa. Note that we also have to train the spatial transformer in stage (2). It is easier to first learn the decoder on full masks only and then separately learn the spatial transformer on top of the “correct” decoder, instead of training both simultaneously.

4.2 Resizing Completed Masks with Spatial Transformers

Both the input and output of Amodal-VAE are tightly cropped 2D instance masks, separately resized or squeezed to the model's fixed input and output dimensions. Therefore, the output masks are not at the same scale as the partial input masks, and we cannot simply resize and paste the completed masks back into the image. To overcome this hurdle, we learn an affine transformation that shifts and scales the output mask to correct for the discrepancy. The output mask can then be pasted back into the full image using the resizing and positioning of the partial input mask (see Fig. 2). Given an instance's partial mask $\hat{y}$ and completed mask $y$, generated by Amodal-VAE's decoder at the VAE's fixed output dimensions, we learn a spatial transformation function $g_\theta(y, \hat{y}) \rightarrow y'$ such that the transformed $y'$ is the completed mask at the same scale and position as the input mask $\hat{y}$. Specifically, we first predict the transformation parameters $(t_x, t_y, s_x, s_y) = g_\theta(y, \hat{y})$ and form

$$A_\theta = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \end{bmatrix} \quad (6)$$

where $g_\theta$ is a neural network and $A_\theta$ is a 2D affine transformation matrix that is applied to each pixel in $y$ and used for differentiable image sampling as defined in [15]. The transformation defined through $g_\theta$ and $A_\theta$ is end-to-end differentiable and can be trained by backpropagation together with the Amodal-VAE. The spatial transformer function, operating on the Amodal-VAE output, is trained during training stages (2) and (3) (in training stage (1) we train on complete masks only).
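Applying the predicted transform of Eq. (6) maps naturally onto PyTorch's differentiable grid sampling. A minimal sketch, where the helper name and calling convention are our own assumptions rather than the paper's code:

```python
import torch
import torch.nn.functional as F

def apply_affine(y, params):
    """Warp decoder output y of shape (B, 1, H, W) with the 2D affine
    transform of Eq. (6). `params` holds (tx, ty, sx, sy), shape (B, 4)."""
    tx, ty, sx, sy = params.unbind(dim=1)
    zero = torch.zeros_like(sx)
    # Build the 2x3 matrices A_theta = [[sx, 0, tx], [0, sy, ty]].
    theta = torch.stack([
        torch.stack([sx, zero, tx], dim=1),
        torch.stack([zero, sy, ty], dim=1),
    ], dim=1)                                          # (B, 2, 3)
    grid = F.affine_grid(theta, list(y.shape), align_corners=False)
    return F.grid_sample(y, grid, align_corners=False)
```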
5 Experiments

We now extensively evaluate our Amodal-VAE and show its application to interactive scene editing. Please refer to the supplementary material for training and model implementation details.

Dataset: We focus on street scenes in this paper. KINS [27] is a large-scale dataset derived from KITTI [10] which contains both instance and amodal annotations. The dataset consists of 7,474 images for training and 7,517 images for testing, with 18,241 and 17,646 complete instances in the training and test set, respectively. Following [39], we use the first ≈10% of images from the test set as the validation set (750 images in total). In this paper, we exploit only instance masks during training; amodal ground truth labels are used only for evaluation. The Cityscapes dataset [7] contains 5,000 images of driving scenes, including 2,975 images for training, 500 for validation, and 1,525 for testing. In the training set, 11,251 out of 52,469 instances are without occlusion. The instance masks in Cityscapes are finely annotated for the visible portions of the objects; however, no amodal annotations are available. We therefore treat Cityscapes as an additional dataset to test the generalization of our approach.

5.1 Amodal Mask Completion

Comparisons: We first benchmark our approach on the task of amodal mask completion. To compare with baseline models, we use the amodal completion setting introduced in [39], where at test time RGB images and ground truth (GT) instance masks are provided as input to our model. Since our model does not exploit specific foreground occlusion masks as input, we use the De-occlusion-NOG (no order grounding) setting as a baseline. The performance of our model on KINS is shown in Table 1. Because occluded regions are relatively small compared to full masks, the input instance masks already have a high 87.03% mean Intersection over Union (mIOU) with the GT full masks. For this reason, we also separately evaluate mIOU on the invisible area only (the metric is sketched below). Results show that Amodal-VAE outperforms the state-of-the-art De-occlusion [39] model by 5.66% in invisible mIOU and 0.64% in full mIOU, which is a significant improvement.

Table 1: Amodal Completion on KINS. Invisible mIOU means we evaluate mIOU only on invisible areas. GT Crop denotes that the input is cropped by the GT amodal bounding box.

Method                | GT Crop | mIOU  | Invis. mIOU
----------------------|---------|-------|------------
Instance Mask         | ✗       | 87.03 | 0
Nearest Neighbor Mask | ✗       | 93.71 | 54.97
De-occlusion          | ✗       | 94.04 | 57.19
Amodal-VAE            | ✗       | 94.68 | 62.85
RGB-Amodal-VAE        | ✗       | 94.53 | 61.97
Amodal-VAE + GT Box   | ✓       | 97.64 | 82.30

Table 2: Ablation study of Amodal-VAE on KINS.

Method                  | Full mIOU | Invisible mIOU
------------------------|-----------|---------------
Amodal-VAE              | 94.68     | 62.85
w/o Full-Mask training  | 94.28     | 58.92
w/o Simulated training  | 83.30     | 35.82
w/o Likelihood training | 94.04     | 57.04
 - Latent Space L2 loss | 94.02     | 56.90
w/o Class Conditioning  | 93.56     | 53.03

For another baseline experiment, we generate a synthetic dataset from the KINS training set. Using the full mask data, for each mask we simulate 5 different occlusions by randomly pasting another mask as foreground, hence generating a synthetic dataset of 91,205 paired partial and complete masks. We can then use a nearest-neighbor approach for mask completion: we compute the cosine similarity between an input partial mask and the synthetic partial masks, and output the full mask corresponding to the most similar synthetic partial mask. Results show that Amodal-VAE outperforms this baseline (Nearest Neighbor Mask in Table 1).

We further ablate the use of RGB information as additional input to the VAE. After the full-mask-only training stage, we use a ResNet-50 pretrained on ImageNet, which takes cropped RGB images as input; we concatenate the ResNet's features and the mask encoder output, add two further convolutional layers to merge the two, and predict the latent code posterior distribution. The ResNet is finetuned together with all other trainable parameters, and we optimize the setup's hyperparameters and report the best result. As shown in Table 1, line RGB-Amodal-VAE, the additional RGB-based image features do not boost performance. Hence, for our main Amodal-VAE model we discard the RGB input for simplicity. The slight decrease in performance might seem counterintuitive, and it is possible that a more carefully designed model architecture would extract more useful information from the RGB input, but we leave this for future research.
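For reference, the full and invisible mIOU used in Tables 1 and 2 can be computed as sketched below. This is our reading of the metric (the evaluation protocol follows [39]); input conventions are hypothetical.

```python
import numpy as np

def mean_iou(pred_masks, gt_masks, visible_masks, invisible_only=False):
    """pred/gt/visible are lists of per-instance boolean arrays. With
    invisible_only=True, IoU is scored only on pixels outside the visible
    (modal) mask, i.e. the occluded region."""
    ious = []
    for pred, gt, vis in zip(pred_masks, gt_masks, visible_masks):
        if invisible_only:
            region = ~vis                       # occluded pixels only
            pred, gt = pred & region, gt & region
        union = (pred | gt).sum()
        if union == 0:
            ious.append(0.0)                    # e.g. unoccluded instance
            continue
        ious.append(float((pred & gt).sum()) / float(union))
    return float(np.mean(ious))
```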
In the experiments above, we always tightly crop the instance mask. However, in an interactive scene editing tool, users can be asked to provide the amodal box. Thus, we also evaluate our method utilizing GT amodal bounding boxes, which precisely indicate the extent of the occluded area. In these experiments (Amodal-VAE + GT Box), we achieve 97.64% full and 82.30% invisible mIOU. This suggests that there is much room for improvement through better automatic cropping of the input masks.

Posterior sampling: To further motivate the use of a probabilistic model, we show quantitative results from multiple posterior predictions. For each partial mask instance, we sample 20 latent codes from the approximate posterior distribution and decode them into the corresponding completed masks. We calculate mIOU using the masks with the best visible-area IOU or the best amodal-GT IOU (see the sketch below). The results in Table 4 show that by sampling we find masks that match the amodal GT significantly better than using the approximate posterior mode. Hence, the approximate posterior incorporates diverse plausible masks, correctly capturing the ambiguity. Using samples from the full posterior distribution may benefit downstream applications. Additional results are provided in the supplementary material, where we analyze approximate posterior widths as a function of occlusion ratio and also show prior samples.

Ablations: We first ablate the three training stages described in Sec. 4.1. The results in Table 2 show that performance drops by 3.93% if we omit the first Full-Mask-only training stage. Furthermore, the model performs significantly worse without the second occlusion-simulation training stage, because this is where the model learns to actually map partial to full masks. Likelihood-based (i.e., using the ELBO) partial-mask-only finetuning as the third stage also plays an important role, since it brings real occluded instances into the training loop. Finally, conditioning on class information is crucial, as it helps the VAE to better infer the masks, especially under large occlusion.

Next, we conduct cross-dataset evaluations. We train Amodal-VAE on the Cityscapes training set and evaluate on the KINS test set. Due to the mismatch in class categories across datasets, we merge the bus and car classes into one class, and the motorcycle and bicycle classes into another. Results in Table 5 show the cross-domain stability of our model: we consistently outperform the De-occlusion baseline.

Table 5: Cross Domain Amodal Completion. Models are trained on Cityscapes and tested on KINS.

Method       | GT Crop | Full mIOU | Invis. mIOU
-------------|---------|-----------|------------
Amodal-VAE   | ✗       | 93.72     | 56.18
De-occlusion | ✗       | 93.19     | 48.23

Table 6: User Study. We evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. Interestingly, subjects prefer our object completions to the human-labeled ones.

Amodal-VAE (GT-box) | Ground Truth | No Preference
--------------------|--------------|--------------
46.68               | 39.50        | 13.8

Qualitative results: We show qualitative results in Figure 3 and compare to human-annotated masks in Figure 4. Our generated masks contain more details and look more natural than the GT masks. We further show shape variations obtained by sampling from the approximate posterior distribution in Figures 6 and 7: different plausible completions are drawn from a single partial mask.
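The best-of-20 posterior-sampling evaluation above, and the sampled variations shown in Figures 6 and 7, come down to drawing latent codes and scoring the decoded masks. A hypothetical sketch, assuming the `encode`/`decode` interfaces from the earlier sketches and boolean mask tensors:

```python
import torch

@torch.no_grad()
def best_of_k(model, y_hat, c, ref_mask, k=20, t=0.5):
    """Sample k completions for one partial mask and keep the one with the
    highest IoU against a boolean reference mask (amodal GT, or the visible
    region when no GT is available). Names and interfaces are illustrative."""
    mu, logvar = model.encode(y_hat, c)
    best_iou, best_mask = -1.0, None
    for _ in range(k):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        mask = torch.sigmoid(model.decode(z)) > t  # threshold Bernoulli probs
        inter = (mask & ref_mask).sum().item()
        union = (mask | ref_mask).sum().item()
        iou = inter / max(union, 1)
        if iou > best_iou:
            best_iou, best_mask = iou, mask
    return best_mask, best_iou
```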
User study: We also evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. We assume that the user draws the amodal box, which is provided to Amodal-VAE. We randomly sampled 3,260 instances from the KINS test set and asked Turkers to indicate their preference between Amodal-VAE's amodal masks and the GT-annotated amodal masks. Interestingly, as shown in Table 6, users prefer Amodal-VAE's masks 46.68% of the time versus 39.50% for the ground truth. This demonstrates that Amodal-VAE outperforms the drawing skills of the human annotators of the KINS dataset [27]. In the supplementary material, we provide additional results on amodal segmentation, where we first predict modal segmentation masks using a standard segmentation model and then use Amodal-VAE to complete the partial segmentation masks.

5.2 Object Manipulation Application

Here, we apply Amodal-VAE to interactive scene editing and report the results.

Background and Instance Inpainting: Since Amodal-VAE can be used to predict complete instance masks for all objects in a scene, we can use these inferred masks to move or delete objects. Such operations uncover previously occluded parts of the objects and the background. We complete the missing content using an inpainting neural network, which takes RGB images with missing content as input and generates a realistic completed output. Similar to [39], we use the convolutional inpainting network from [25], which employs partial convolutions and nearest-neighbor up-sampling in the decoding stage. Inpainting details are available in the supplementary material.

We benchmark the performance of instance inpainting. Since we do not have any ground truth appearance for the invisible areas, we use the Fréchet Inception Distance [12] (FID score) to evaluate the inpainting results. FID is a measure of similarity between two datasets of images; it was shown to correlate well with human judgment of visual quality and is most often used to evaluate the quality of samples from Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network [33] (sketched below). In our case, we use non-occluded instances in the KINS test set as a reference dataset. For each instance, we use Amodal-VAE and the inpainting network to complete the mask and appearance, and we compute FID distances between the reference dataset and the inpainting results based on predicted amodal masks. Intuitively, the better and more natural the amodal mask, the lower the FID score should be. Our Amodal-VAE achieves 41.44 versus 50.36 for the baseline De-occlusion approach. Note that the inpainting networks we use for both methods are identical. We thus conclude that the amodal masks predicted by Amodal-VAE lead to more natural completions.
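The Fréchet distance between the two fitted Gaussians has a closed form, sketched below. This is the standard reference computation for FID [12], not the paper's own code.

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """FID between two sets of Inception features, arrays of shape (N, D):
    ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):       # sqrtm can return tiny imaginary
        covmean = covmean.real         # parts from numerical noise
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```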
Instance Manipulation: Furthermore, we show how we can change the pose of objects, even of those that are partially occluded. Since we are working with complex street scenes, we focus on cars for this demonstration. We exploit GauGAN [26], which can separately take into account local appearance and mask shape. We first infer an object's complete shape and appearance as described above. Then, we use GauGAN's encoder to infer its latent representation, which captures only local appearance information. When regenerating the image, we randomly sample complete shapes from the test set and feed them to the SPADE layers. Due to the separation of local appearance and global semantic information in GauGAN, the newly generated scene reflects any pose changes.

Qualitative results: We first show qualitative inpainting results in Table 3. Conditioned on the complete mask, the inpainting network recovers the invisible appearance successfully. We can also erase a foreground object entirely, filling in the revealed background with a separate background inpainting module. In Figure 5, we showcase different functionalities of our interactive scene editing tool. Based on the amodal mask, our tool supports swapping the order of, deleting, moving, and scaling objects. We also showcase how we can change the pose of objects by utilizing GauGAN as described.

6 Conclusions

In this work, we propose Amodal-VAE, a simple probabilistic method for amodal instance completion which does not require amodal labels for training. In particular, our method is based on a variational autoencoder that learns to reconstruct full object masks from partially occluded ones using a carefully designed training strategy. This exploits both full and partial instances available in existing segmentation datasets. We quantitatively and qualitatively showcase the performance of our method on the downstream task of scene editing of complex street scenes. Our experiments show significant improvement over the recently proposed state-of-the-art method. We provide our method as an interactive image editing tool in which users can remove, move, or swap different objects in the image. Note that training Amodal-VAE requires a high-quality dataset with complete masks, and each category must contain a sufficient number of objects. Therefore, in this work we focus on driving scenes, which contain mainly rigid objects and for which sufficient data is available. Applying our model to more complex scenes and to settings with limited data is left for future work.

7 Broader Impact

Our proposed model can be used in a wide range of applications that require reasoning about occluded objects. These include planning tasks in robotics, object tracking, and editing a photo or video. We focus on two significant impacts of using our model. The first is in the context of autonomous driving: an autonomous car must infer the geometry and identity of surrounding objects for its decision-making process. Partially visible objects could lead to wrong estimates for motion planning, and thus reasoning about the full extent of objects can lead to much safer control. Our approach infers the complete shapes of occluded objects for this purpose. The other major impact is on augmented reality: one could use our technology to snap a photograph of the environment and "delete" existing objects from the photograph, replacing them with alternatives. The crux of our approach is in deleting content from an image, which could be subject to misuse. We encourage work on detecting fakes as the standard technology to deal with image manipulation approaches.

Acknowledgments and Disclosure of Funding

This work was fully funded by NVIDIA and no third-party funding was used.
1. What is the focus and contribution of the paper regarding VAE for amodal object completion?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and comparisons with other works?
3. Do you have any questions or concerns regarding the method's application and potential for future research?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review format: Summary and Contributions, Strengths, Weaknesses
Summary and Contributions

The paper proposes a method based on a VAE for amodal object completion without ground-truth amodal segmentation annotation (achieved via occlusion simulation). The paper also demonstrates applications including inpainting and instance manipulation.

Strengths

The paper proposes a sound method based on the established VAE model for occluded-shape encoding and full-shape generation. Although the VAE modeling itself is not new (similar to [30]), the paper is able to demonstrate its applicability to the domain of shape segmentation via both qualitative and quantitative evaluation, as well as downstream applications.

Weaknesses

[1] Limited novelty. As mentioned above, the key components of the method have mostly been covered in [30], with the major difference being the **domains** the two papers address (3D shape completion for [30] and instance segmentation completion for this paper). That being said, the theoretical contribution of this paper is limited. In addition, occlusion simulation as a way to generate paired occluded-shape/full-shape data is already well utilized.

[2] Lack of comparison to baselines. Only one baseline is compared against in the paper, which is limited. The lack of a qualitative comparison between the method and the baseline also weakens the conclusion that the method is superior not only numerically but also visually. Furthermore, the paper only evaluates on the car category of KITTI. Is it possible to evaluate on multiple categories of different amodal shape datasets? The car category is a relatively easy one, as car shapes do not exhibit much intra-class variation compared to some other categories. For example, as the method only uses the occluded shape as input, would it be possible to create synthetic datasets from ShapeNet by rendering into 2D silhouettes, similar to [30]?

[3] Further clarification. How is the spatial transformation network learned? What is the supervision?

[4] Although demonstrating the downstream applications of the method in Section 5.2 is much appreciated, as it provides insight into how useful the model is, this section is not very relevant to the central task, amodal mask completion, because such applications could be built on top of any method for this task, and the lack of comparison with the baseline method fails to demonstrate how the proposed method surpasses prior art in those applications.
NIPS
Title Variational Amodal Object Completion Abstract In images of complex scenes, objects are often occluding each other which makes perception tasks such as object detection and tracking, or robotic control tasks such as planning, challenging. To facilitate downstream tasks, it is thus important to reason about the full extent of objects, i.e., seeing behind occlusion, typically referred to as amodal instance completion. In this paper, we propose a variational generative framework for amodal completion, referred to as Amodal-VAE, which does not require any amodal labels at training time, as it is able to utilize widely available object instance masks. We showcase our approach on the downstream task of scene editing where the user is presented with interactive tools to complete and erase objects in photographs. Experiments on complex street scenes demonstrate state-of-the-art performance in amodal mask completion, and showcase high quality scene editing results. Interestingly, a user study shows that humans prefer object completions inferred by our model to the human-labeled ones. 1 Introduction One of the most remarkable properties of the human visual system is the ability to rapidly recognize objects and understand their spatial extent in complex visual scenes, even when objects are barely visible due to occlusion [9, 42]. This is important, as it allows humans to more accurately anticipate what can happen a few moments into the future, and plan accordingly. We expect such a capability to also benefit robotic systems. Reasoning about objects and their extent is also key in other contexts, for example, in semantic image editing tasks. Imagine a user that wants to erase an object from a photograph, and possibly even manipulate objects that are partially hidden behind it. To do this, an A.I. system needs to be able to “complete” the occluded objects in the scene, both in their spatial extent, i.e., their masks, as well as in appearance. This problem is typically referred to as amodal instance completion, and is an important component of many applications. However, most research in the domain of semantic segmentation, has focused on the “modal” perception of the scene [6, 11, 34], i.e., segmenting visible pixels of the objects, for which large-scale annotated datasets are available [7, 23, 41]. The lack of labeled data for amodal segmentation is likely due to the difficulty and ambiguity of the annotation task. Amodal annotation of occluded objects requires a human labeler to draw an imagined contour rather than tracing a visible contour in an image, which requires drawing skills that not all annotators possess. In cases where objects are highly occluded there may also be multiple valid hypotheses for a plausible completion. In this work, we propose a variational generative framework for amodal instance completion, called Amodal-VAE. It does not require amodal labels at training time, and exploits instance masks of visible parts of the objects that are widely available in current datasets. Our approach learns to reconstruct full objects from partial masks by training a variational autoencoder in carefully designed 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. stages that allow us to model the complete mask with a low-dimensional latent representation. The probabilistic framework naturally incorporates the ambiguity in the mask completion task, and is able to produce multiple plausible completions which existing work cannot. 
We showcase our approach on the downstream task of scene editing where the user is presented with interactive tools to complete and erase objects in an image. Experiments demonstrate significant improvements over the recently released state-of-the-art approach [39]. A user study further reveals that participants strongly prefer amodal masks produced by our model over the human-annotated amodal masks. 2 Related Work We focus our review on amodal mask completion which is the primary contribution of our work. The task of amodal instance segmentation aims at segmenting both visible and occluded parts of an object instance. This is in contrast to traditional semantic segmentation [34, 6] or instance segmentation tasks [11, 28, 1, 24], which aim to segment only the visible pixels of an object. Prior work usually decomposes amodal instance segmentation into instance segmentation and amodal mask completion. Supervision is needed for both stages, typically resorting to either synthetic datasets or human-provided labels, which we discuss below. Real Datasets: Recently, human-labeled real datasets have been collected for amodal instance segmentation. Authors extended KITTI [10] to create KINS [27], and COCO [23] to create COCOA [42]. However, there is little available labeled data, in part due to the ambiguity of the labeling task. Synthetic Datasets: One plausible way to get amodal labels is to exploit graphics renderers [14, 43, 17]. In [14], a photo-realistic video dataset is extracted from the GTA-V game along with pixelaccurate masks. In [16], 3D models are aligned with images from PASCAL 3D+ [36] and rendered along with their annotated 3D pose to obtain masks and amodal bounding boxes. In [8], the authors created DYCE by taking snapshots from 3D synthetic scenes [43]. While 3D content provides labels for “free" via rendering, it is not widely available and typically lacks diversity and realism. Simulated Data: A simple way to utilize real data annotated with instance (but not amodal) masks is by simulating occlusion, i.e., by overlaying objects on top of other objects [21, 37, 39]. One problem with this type of approach is that the composited images do not look natural and thus appearance-based models may not generalize well to real images. [37] created OVD (Occluded Vehicle Dataset) by randomly placing pedestrians and vehicles on base images and exploited the Deep Harmonization [35] technique to make synthetic images look natural. Our work, while also relying on occlusion simulation, does so only for object masks, ignoring appearance altogether. Our method can thus exploit either rendered masks, or masks from one dataset for use on another dataset. Methodology: In most prior work, labels, either real or synthetic, are used in a standard supervised framework. In [21], the authors perform amodal segmentation by iteratively expanding the bounding box around an instance mask based on heat intensity. [27] proposes an occlusion classification branch on top of RPN [28]. In addition to the standard mask prediction loss, [37] utilizes a discriminator loss to encourage amodal predictions to look more similar to amodal masks rendered via the Shapenet dataset [5]. Our work also bears similarity to the recent De-occlusion paper [39] due to the application to scene editing. However, the approaches for mask completion differ in methodology, where ours frames the problem probabilistically, while [39] is a deterministic method. 
Related to our work is [32, 38], where the authors train a VAE [20] to learn a 3D shape prior. This prior is then used to generate closed 3D meshes [32] from partial point cloud observations. Several unsupervised methods have also been proposed for amodal mask completion. Prior works treat amodal completion as a contour completion problem, usually recovered by minimizing shape energy. [18] uses Euler spirals, [30] exploits Contour-Completion Random Fields and [22] utilizes minimum Hamiltonian cycles and Bezier curves. However, most of these unsupervised methods focus on simple shapes and cannot easily be scaled to real world datasets. 3 Background and Problem Formulation In this section, we review Variational Autoencoders (VAEs) and we formally define the problem of amodal instance completion, which we are addressing. 3.1 Variational Autoencoders Given a dataset D = {yi} N i=1, the VAE framework enables us to learn a latent variable generative model p(y, z) = pw1(y|z)p(z), where p(z) is a prior distribution over latent variables and pw1(y|z) is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters w1 [20, 29]. Since the true posterior distribution p(z|y) is intractable, VAEs employ an auxiliary approximate posterior distribution or encoder qw2(z|y), parametrized by another neural network with parameters w2. When additional information about the data is available, such as the samples’ classes or categories c, the framework can be extended to conditional VAEs, in which the encoder, prior and decoder can be conditioned on this class information [19, 31]. VAEs are trained via variational inference, maximizing the Evidence Lower BOund (ELBO). Here, we consider the case in which only the encoder is conditioned on additional class information c that is available for all samples in the dataset D. The ELBO then is LVAE(w1,w2) = Ey,c∼D [ Ez∼qw2 (z|y,c) [log pw1(y|z)]− λDKL(qw2(z|y, c)‖p(z)) ] (1) When calculating gradients during training, the expectation over the data is estimated using minibatches and the expectation over the latent variables z is usually calculated using a single sample from the approximate posterior. Parameter updates are done with stochastic gradient descent, employing the re-parameterization trick [20, 29]. Due to the KL-regularization, the model learns to encode data y in an efficient low-dimensional latent representation z. Although strict variational inference corresponds to λ = 1, it has been shown that different values of λ allow us to carefully control the balance between the KL and the reconstruction terms [3, 40, 2, 13, 4], which can be beneficial. 3.2 Amodal Instance Completion Let D = {ŷi} N i=1, be a dataset of “partial” instance object masks ŷi ∈ Ŷ in images. We can define an Amodal Mask Completion method as a mapping f : Ŷ → Y with completed masks yi ∈ Y. In words, the amodal instance completion task recovers the occluded part of a particular object from the partially occluded instance mask. If available, we can use additional information in the function f , such as the images’ RGB pixel values or the instances’ classes ci, like in the VAE framework. Note that, formally, the set of realistic complete masks Y is a subset of all possible partial masks Ŷ. 
4 Variational Object Completion A trivial solution to the task of Amodal Mask Completion would be collecting a training dataset Dtrain = {yi, ŷi} N i=1 consisting of paired partial masks ŷi and corresponding complete masks yi (and potentially additional information, such as instance classes ci). Then, we could fit a parametric model, i.e. a neural network, to it by treating it as an image segmentation problem. However, annotating an amodal dataset is challenging, time-consuming, expensive and sometimes ambiguous, as objects resulting from occlusions may not even be well-defined. The resulting annotations may vary from individual to individual, which could also make learning more difficult. Instead, we exploit a weakly-supervised approach, where we have access to data with only partially visible masks (Ŷ) and separate data with only full masks (Y). As shown in Figure 2, we are using a VAE framework, in which we first encode partially visible masks ŷ into a smooth latent space and then decode the resulting latent codes z into the full masks y. A crucial advantage of the probabilistic VAE-based framework is that it naturally captures the ambiguity when completing partial masks in its posterior distribution (see Fig 6). Furthermore, it also deals gracefully with inputs that it is uncertain about. Since the model is trained such that all points under the prior distribution map to realistic completed masks, slightly erroneous latent code predictions still decode into well-defined outputs. We denote our model as Amodal-VAE. Next, we present our Amodal-VAE and how we train it in order to overcome the previously discussed challenges in more detail. 4.1 Learning to Reconstruct Full Objects We start by presenting the high-level architecture of Amodal-VAE. For simplicity, we assume a factorial Normal prior distribution p(z) ∼ N (0, I) and factorial Normal approximate posteriors qw2(z|y, c) and q̂w3(z|y, c) with means and standard deviations parametrized via convolutional neural networks that also see the objects’ categories c, which are available in all datasets we are working with or can be predicted if necessary. The decoder pw1(y|z) is a factorial Bernoulli distribution, predicts binary masks, and is parametrized using a deconvolutional neural network (see supplementary material for details). To best leverage the two separate datasets Y with fully visible masks and Ŷ with partially visible masks, we train Amodal-VAE in three stages. (1) Full-Mask-only Training: We want Amodal-VAE to generate only realistic full masks, even when provided with partial masks that are significantly occluded as input. Hence, during the first step we focus on learning the generative component pw1(y|z)p(z) of the model and we train AmodalVAE on full masks only. Amodal-VAE is trained using the ELBO defined in Eq. 1 on Y. It learns low-dimensional representations of complete masks of real objects in its continuous latent space. (2) Simulated Partial-to-Full-Mask Training: After (1), any point in latent space under the prior maps to a realistic completed mask. Now, based on the full mask data, we simulate various occlusions, hence generating a synthetic dataset of paired partial and complete masks of the form Dtrain = {yi, ŷi} N i=1. Freezing the previously learnt decoder, i.e. the decoder pw1(y|z), we then learn a new encoder q̂w3(z|ŷ, c) with parameters w3 that maps partial masks ŷ to points in latent space z that decode into the correct completed masks y. 
For constructing the synthetic dataset, we sample random instances yforeground and yinstance from Y and mask out yinstance by randomly positioning yforeground in front of it, similar to [39]. We can now maximize the following adapted ELBO objective LAmodal-VAE(w3) = Eŷ,y,c∼Dtrain [ Ez∼q̂w3 (z|ŷ,c) [log pw1(y|z)]− λDKL(q̂w3(z|ŷ, c)‖p(z)) ] (2) where ŷ are the simulated partial masks, y are the full masks, and c is additional object class information. Notice that only the new encoder parameters w3 are optimized and that we do not use the RGB image information. The composition of the new encoder with the frozen decoder forms the amodal instance completion mapping, which we can formally express as f(ŷ, c) = pt w1 (q̂µ w3 (ŷ, c)), where we defined the deterministic functions q̂µ w3 (ŷ, c) as the mean of q̂w3(z|ŷ, c) and p t w1 (z) as the binary output mask calculated from pixelwise Bernoulli probabilities pw1(y|z) with threshold t. Intuitively, the first term in Equation 2 is the reconstruction loss that guides the encoder to find an appropriate position in the low dimensional Gaussian manifold which is decoded to Y. The second term, the KL loss, regularizes the new approximate posterior q̂w3(z|ŷ, c) to generate only encodings that fall under the prior distribution p(z). Because of the first training step and since we keep the decoder frozen, all such encodings z map to complete masks. To aid the new encoder to more easily search the latent space, we exploit an additional latent code distance loss. We pull encodings from complete and corresponding partial masks close to each other, since they both need to decode into the same full masks. We minimize the following loss: LLatentCode(w3) = Eŷ,y,c∼Dtrain [ Eẑ∼q̂w3 (z|ŷ,c),z∼q̂w3 (z|y,c) 1 2 [ẑ − z]2 ] , (3) for paired ŷ and y. We approximate the inner expectation using single samples from the approximate posteriors. We found adding this loss to the ELBO objective to slightly increase performance. However we found that it can’t replace the reconstruction loss. The final loss becomes: L(w3) = LLatentCode(w3) + LAmodal-VAE(w3) (4) (3) Partial-Mask-only Finetuning: In the third training stage, we “finetune” the Amodal-VAE by training its encoder in standard VAE-fashion using only partial masks from Ŷ, masking out all non-visible pixels. Finetuning the Amodal-VAE in this way helps the model to deal with complex realistic occlusions, which may not occur during the occlusion simulation in (2), for example since we only use single foreground instances to create simulated occlusions. The decoder remains frozen. For a partially visible mask ŷ, we define its visible pixels as ŷvis. We can define an ELBO as LFinetuning(w3) = Eŷ,c∼Ŷ [ Ez∼q̂w3 (z|ŷ,c) [log pw1(ŷ vis|z)]− λDKL(q̂w3(z|ŷ, c)‖p(z)) ] (5) where we consider only the reconstruction loss on the visible pixels. In training stages (2) and (3), we additionally apply a spatial transformer network on the output, that learns to resize the completed masks such that they can be pasted back into the scene (see Sec 4.2). Motivation: One may ask, why separate training stages (1) and (2)? When learning the actual amodal completion model in step (2), the approximate posterior sees different partial masks, which can look entirely different due to different simulated occlusions, but that nevertheless map to similar completed masks. Alternatively, similar partial masks may correspond to very different completed masks. 
Training on such data constitutes a very difficult and ambiguous learning problem, unlike regular VAE training. If the generative component, i.e. the decoder, was also trained like this, it would result in a weaker model encoding less information in latent space. Therefore, we found it to be beneficial to separately train the generative component in robust standard-VAE fashion with full masks only first and then freeze it. After all, we know that we want to generate only ever full masks. In other words, we are separating the difficulty of learning a high quality generative component from the difficulty of learning to map many different partial masks to similar completed masks and vice versa. Note that we also have to train the spatial transformer in step (2). It is easier to first learn the decoder on full masks only and then separately learn the spatial transformer on top of the “correct” decoder, instead of training both simultaneously. 4.2 Resizing Completed Masks with Spatial Transformers Both input and output of Amodal-VAE are tightly cropped 2D instance masks, separately resized or squeezed to the model’s fixed input and output dimensions. Therefore, the output masks are not in the same scale as the partial input masks. Because of that, we cannot simply resize and paste the completed masks back into the image. To overcome this hurdle, we learn an affine transformation that shifts and scales the output mask to correct for the discrepancy. The output mask can then be pasted back into the full image using the resizing and positioning of the partial input mask (see Fig 2). With an instance’s partial mask ŷ and completed mask y, generated by Amodal-VAE’s decoder in the VAE’s fixed output dimensions, we learn a spatial transformation function gθ(y, ŷ) → y ′ such that the transformed y′ is the completed mask in the same scale and at the same position as the input mask ŷ. Specifically, we first predict the transformation parameters (tx, ty, sx, sy) = gθ(y, ŷ) Aθ = [ sx 0 tx 0 sy ty ] (6) where gθ is a neural network and Aθ is a 2D affine transformation matrix that is applied to each pixel in y and used to do differentiable image sampling as defined in [15]. The transformation defined through gθ and Aθ is end-to-end differentiable and can be trained by backpropagation together with the Amodal-VAE. The spatial transformer function, operating on the Amodal-VAE output, is trained during training stages (2) and (3) (in training stage (1) we train on complete masks only). 5 Experiments We now extensively evaluate our Amodal-VAE and show its application to interactive scene editing. Please refer to supplementary material for training and model implementation details. Dataset: We focus on street scenes in this paper. KINS [27] is a large scale dataset derived from KITTI [10], which contains both instance and amodal annotations. The dataset consists of 7,474 images for training and 7,517 images for testing. There are 18,241 and 17,646 complete instances in the training and test set respectively. Following [39], we use the first ≈ 10% images from the test set as validation set (750 images in total). In this paper, we only exploit instance masks in training and amodal ground truth labels are only used for evaluation. The Cityscapes dataset [7] contains 5,000 images of driving scenes, including 2,975 images for training, 500 for validation, and 1,525 for testing. In the training set, 11,251 out of 52,469 instances are without occlusion. 
5 Experiments

We now extensively evaluate our Amodal-VAE and show its application to interactive scene editing. Please refer to the supplementary material for training and model implementation details.

Dataset: We focus on street scenes in this paper. KINS [27] is a large-scale dataset derived from KITTI [10] which contains both instance and amodal annotations. The dataset consists of 7,474 images for training and 7,517 images for testing, with 18,241 and 17,646 complete instances in the training and test set, respectively. Following [39], we use the first ≈ 10% of images from the test set as a validation set (750 images in total). In this paper, we exploit only instance masks during training; amodal ground truth labels are used for evaluation only. The Cityscapes dataset [7] contains 5,000 images of driving scenes, including 2,975 images for training, 500 for validation, and 1,525 for testing. In the training set, 11,251 out of 52,469 instances are without occlusion. The instance masks in Cityscapes are finely annotated for the visible portions of the objects, but no amodal annotations are available. We therefore treat Cityscapes as an additional dataset to test the generalization of our approach.

5.1 Amodal Mask Completion

Comparisons: We first benchmark our approach on the task of amodal mask completion. To compare with baseline models, we use the amodal completion setting introduced in [39], where at test time RGB images and ground truth (GT) instance masks are provided as input to our model. Since our model does not exploit specific foreground occlusion masks as input, we use the De-occlusion-NOG (no order grounding) setting as a baseline. The performance of our model on KINS is shown in Table 1. Because occluded regions are relatively small compared to full masks, the input instance masks already have a high 87.03% mean Intersection over Union (mIOU) with the GT full masks. For this reason, we also separately evaluate mIOU on the invisible area only. Results show that Amodal-VAE outperforms the state-of-the-art De-occlusion [39] model by 5.66% for invisible mIOU and 0.64% for full mIOU, which is a significant improvement.

Method                | GT Crop | mIOU  | Invis. mIOU
----------------------|---------|-------|------------
Instance Mask         | ✗       | 87.03 | 0
Nearest Neighbor Mask | ✗       | 93.71 | 54.97
De-occlusion          | ✗       | 94.04 | 57.19
Amodal-VAE            | ✗       | 94.68 | 62.85
RGB-Amodal-VAE        | ✗       | 94.53 | 61.97
Amodal-VAE + GT Box   | ✓       | 97.64 | 82.30

Table 1: Amodal Completion on KINS. Invisible mIOU means we evaluate mIOU only on invisible areas. GT Crop denotes that the input is cropped by the GT amodal bounding box.

Method                  | Full mIOU | Invisible mIOU
------------------------|-----------|---------------
Amodal-VAE              | 94.68     | 62.85
w/o Full-Mask training  | 94.28     | 58.92
w/o Simulated training  | 83.30     | 35.82
w/o Likelihood training | 94.04     | 57.04
 - Latent Space L2 loss | 94.02     | 56.90
w/o Class Conditioning  | 93.56     | 53.03

Table 2: Ablation study of Amodal-VAE on KINS.

For another baseline experiment, we generate a synthetic dataset from the KINS training set. Using the full mask data, for each mask we simulate 5 different occlusions by randomly pasting another mask as foreground, yielding a synthetic dataset of 91,205 paired partial and complete masks. We can then use a nearest-neighbor-based approach for mask completion: we compute the cosine similarity between an input partial mask and the synthetic partial masks, and output the full mask corresponding to the synthetic partial mask with the highest similarity to the input. Results show that Amodal-VAE outperforms this baseline (Nearest Neighbor Mask in Table 1).
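For illustration, this nearest-neighbor baseline reduces to a few lines of PyTorch; the tensor layout below is our assumption.

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_complete(query_partial, bank_partial, bank_full):
    """Nearest-neighbor mask completion baseline.

    query_partial: (H*W,) flattened binary partial mask.
    bank_partial:  (N, H*W) flattened simulated partial masks.
    bank_full:     (N, H, W) corresponding full masks.
    Returns the full mask whose simulated partial mask is most similar
    to the query under cosine similarity.
    """
    sims = F.cosine_similarity(bank_partial, query_partial.unsqueeze(0), dim=1)
    return bank_full[sims.argmax()]
```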
We further ablate the use of RGB information as additional input to the VAE. After the full-mask-only training stage, we use a ResNet-50 pretrained on ImageNet which takes cropped RGB images as input, concatenate the ResNet's features with the mask encoder output, add two further convolutional layers to merge the two, and predict the latent code posterior distribution. The ResNet is finetuned together with all other trainable parameters; we optimize this setup's hyperparameters and report the best result. As shown in Table 1 (line RGB-Amodal-VAE), the additional RGB-based image features do not boost performance. Hence, for our main Amodal-VAE model we discard the RGB input for simplicity. The slight decrease in performance may seem counterintuitive; it is possible that a more carefully designed architecture would extract more useful information from the RGB input, but we leave this for future research.

In the experiments above, we always tightly crop the instance mask. However, in an interactive scene editing tool, users can be asked to provide the amodal box. Thus, we also evaluate our method using GT amodal bounding boxes, which precisely indicate the extent of the occluded area. In these experiments (Amodal-VAE + GT Box), we achieve 97.64% full and 82.30% invisible mIOU, respectively. This suggests that there is much room for improvement through better automatic cropping of the input masks.

Posterior sampling: To further motivate the use of a probabilistic model, we show quantitative results from multiple posterior predictions. For each partial mask instance, we sample 20 latent codes from the approximate posterior distribution and decode them into the corresponding completed masks. We calculate mIOU using the masks with the best visible-area IOU or the best amodal GT IOU. The results in Table 4 show that by sampling we find masks that match the amodal GT significantly better than using the approximate posterior mode. Hence, the approximate posterior incorporates diverse plausible masks, correctly capturing the ambiguity. Using samples from the full posterior distribution may benefit downstream applications. Additional results are provided in the supplementary material, where we analyze approximate posterior widths as a function of occlusion ratio and also show prior samples.
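A minimal sketch of this best-of-k posterior-sampling evaluation, reusing the assumed encoder/decoder interfaces from the earlier sketches; the exact protocol behind Table 4 may differ.

```python
import torch

def best_of_k_completion(encoder, decoder, y_partial, c, y_ref, k=20, t=0.5):
    """Sample k completions from the approximate posterior and keep the one
    with the highest IOU against a reference mask y_ref (e.g., the visible
    area or, for evaluation only, the amodal ground truth)."""
    mu, logvar = encoder(y_partial, c)
    best_iou, best_mask = -1.0, None
    for _ in range(k):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        mask = (torch.sigmoid(decoder(z)) > t).float()  # threshold Bernoulli probs
        inter = (mask * y_ref).sum()
        union = ((mask + y_ref) > 0).float().sum()
        iou = (inter / union.clamp(min=1)).item()
        if iou > best_iou:
            best_iou, best_mask = iou, mask
    return best_mask, best_iou
```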
Ablations: We first ablate the three training stages described in Sec. 4.1. The results in Table 2 show that performance drops by 3.93% invisible mIOU if we omit the first full-mask-only training stage. The model performs significantly worse without the second occlusion-simulation training stage, since this is where it learns to actually map partial to full masks. Likelihood-based (i.e., ELBO-based) partial-mask-only finetuning as the third stage also plays an important role, since it brings real occluded instances into the training loop. Finally, conditioning on class information is crucial, as it helps the VAE to better infer the masks, especially under large occlusion.

Next, we conduct cross-dataset evaluations. We train Amodal-VAE on the Cityscapes training set and evaluate on the KINS test set. Due to the mismatch in class categories across datasets, we merge the bus and car classes into one class, and the motorcycle and bicycle classes into another. The results in Table 5 show the cross-domain stability of our model: we consistently outperform the De-occlusion baseline.

Method       | GT Crop | Full mIOU | Invis. mIOU
-------------|---------|-----------|------------
Amodal-VAE   | ✗       | 93.72     | 56.18
De-occlusion | ✗       | 93.19     | 48.23

Table 5: Cross-Domain Amodal Completion. Models are trained on Cityscapes and tested on KINS.

Amodal-VAE GT-box | Ground Truth | No Preference
------------------|--------------|--------------
46.68             | 39.50        | 13.8

Table 6: User Study. We evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. Interestingly, subjects prefer our object completions to the human-labeled ones.

Qualitative results: We show qualitative results in Figure 3 and compare to human-annotated masks in Figure 4. Our generated masks contain more details and look more natural than the GT masks. We further show shape variations obtained by sampling from the approximate posterior distribution in Figure 6 and Figure 7: different plausible completions are drawn from a single partial mask.

User study: We also evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. We assume that the user draws the amodal box, which is provided to Amodal-VAE. We randomly sampled 3,260 instances from the KINS test set and asked Turkers to indicate their preference between Amodal-VAE's amodal masks and the GT annotated amodal masks. Interestingly, as shown in Table 6, users prefer Amodal-VAE's masks 46.68% of the time versus 39.50% for ground truth. This demonstrates that Amodal-VAE outperforms the drawing skills of the human annotators of the KINS dataset [27]. In the supplementary material, we provide additional results on amodal segmentation, where we first predict modal segmentation masks using a standard segmentation model and then use Amodal-VAE to complete the partial segmentation masks.

5.2 Object Manipulation Application

Here, we apply Amodal-VAE to interactive scene editing and report the results.

Background and Instance Inpainting: Since Amodal-VAE can predict complete instance masks for all objects in a scene, we can use these inferred masks to move or delete objects. Such operations uncover previously occluded parts of the objects and the background. We complete the missing content using an inpainting neural network, which takes RGB images with missing content as input and generates a realistic completed output. Similar to [39], we use the convolutional inpainting network from [25], which employs partial convolutions and nearest-neighbor up-sampling in the decoding stage. Inpainting details are available in the supplementary material.

We benchmark the performance of instance inpainting. Since we do not have any ground truth appearance for the invisible areas, we use the Fréchet Inception Distance [12] (FID score) to evaluate the inpainting results. FID is a measure of similarity between two datasets of images; it has been shown to correlate well with human judgment of visual quality and is most often used to evaluate the quality of samples from Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network [33]. In our case, we use non-occluded instances in the KINS test set as the reference dataset. For each instance, we use Amodal-VAE and the inpainting network to complete the mask and appearance, and we compute the FID between the reference dataset and the inpainting results based on the predicted amodal masks. Intuitively, the better and more natural the amodal mask, the lower the FID score should be. Our Amodal-VAE achieves 41.44 versus 50.36 for the baseline De-occlusion approach; note that the inpainting networks used for both methods are identical. We thus conclude that the amodal masks predicted by Amodal-VAE lead to more natural completions.
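For completeness, FID itself is a closed-form distance between Gaussians fitted to Inception features; a small NumPy/SciPy sketch (the Inception feature extraction is omitted).

```python
import numpy as np
from scipy import linalg

def fid(feats_ref, feats_gen):
    """Frechet Inception Distance between two sets of Inception features,
    each of shape (N, D): ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    mu1, mu2 = feats_ref.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_ref, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```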
Instance Manipulation: Furthermore, we show how we can change the pose of objects, even of those which are partially occluded. Since we are working with complex street scenes, we focus on cars for this demonstration. We exploit GauGAN [26], which can separately take into account local appearance and mask shape. We first infer an object's complete shape and appearance as described above. Then, we use GauGAN's encoder to infer its latent representation, which captures only local appearance information. When regenerating the image, we randomly sample complete shapes from the test set and feed them to the SPADE layers. Due to the separation of local appearance and global semantic information in GauGAN, the newly generated scene reflects the pose changes.

Qualitative results: We show qualitative inpainting results in Table 3. Conditioned on the complete mask, the inpainting network successfully recovers the invisible appearance. We also delete foreground objects using a separate background inpainting module. In Figure 5, we showcase the different functionalities of our interactive scene editing tool: based on the amodal mask, the tool supports swapping depth order, deleting, moving, and scaling objects. We also show how we can change the pose of objects by utilizing GauGAN as described above.

6 Conclusions

In this work, we propose Amodal-VAE, a simple probabilistic method for amodal instance completion which does not require amodal labels for training. Our method is based on a variational autoencoder that learns to reconstruct full object masks from partially occluded ones using a carefully designed training strategy that exploits both the full and partial instances available in existing segmentation datasets. We quantitatively and qualitatively showcase the performance of our method on the downstream task of scene editing of complex street scenes, and our experiments show significant improvement over the recently proposed state-of-the-art method. We provide our method as an interactive image editing tool in which users can remove, move, or swap different objects in an image. Note that training Amodal-VAE requires a high-quality dataset with complete masks, and each category must contain a sufficient number of objects. We therefore focus in this work on driving scenes, which contain mainly rigid objects and for which sufficient data is available. Applying our model to more complex scenes and to settings with limited data is left for future work.

7 Broader Impact

Our proposed model can be used in a wide range of applications that require reasoning about occluded objects, including planning tasks in robotics, object tracking, and photo or video editing. We focus on two significant impacts of using our model. The first is in the context of autonomous driving: an autonomous car must infer the geometry and identity of surrounding objects for its decision-making process. Partially visible objects could lead to wrong estimates for motion planning, so reasoning about the full extent of objects can lead to much safer control; our approach infers the complete shapes of occluded objects for this purpose. The other major impact is on augmented reality: one could use our technology to snap a photograph of an environment and "delete" existing objects from the photograph, replacing them with alternatives. Since the crux of our approach is deleting content from an image, it could be subject to misuse. We encourage work on detecting fakes as the standard technology for dealing with image manipulation approaches.

Acknowledgments and Disclosure of Funding

This work was fully funded by NVIDIA and no third-party funding was used.
1. What is the focus and contribution of the paper regarding amodal object mask completion?
2. What are the strengths of the proposed approach, particularly in its experimental evaluation?
3. What are the weaknesses of the paper, especially regarding its novelty and application?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns or questions regarding the paper's methodology or results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper studies the task of amodal object mask completion. Given the mask of an occluded object, the paper learns a network to complete the mask. To account for the ambiguity in such mask completion, the paper pursues a variational approach. The approach is trained in a stage-wise manner on a combination of occluded and un-occluded object masks. The paper conducts evaluation on the KINS dataset and shows the effectiveness of the various proposed components, as well as of past methods on this task.

Strengths
The paper is well-written. Experimental evaluation uses the latest datasets for the problem being studied.

Weaknesses
1. Limited novelty. The paper applies and adapts standard ideas in the field for the application of amodal mask completion. As such, the paper has limited novelty. Furthermore, the application being considered is only mildly interesting (more on this in the next point).
2. While amodal instance completion (predicting the occluded pixels) is an interesting problem, and the problem of predicting the amodal instance segmentation from a partially occluded object is also interesting, the problem of predicting the full mask from a partial mask is, in my view, less interesting. All applications that the paper considers have the appearance of the occluded object available, and it is a natural question as to why that information is being thrown away. Thus, I find the application inadequately motivated. In fact, if the appearance in the occluded mask is taken into account, the problem may in fact simplify -- there is likely less ambiguity when appearance is taken into account than when it is not.
3. No quantitative evaluation for the prediction of different modes. The paper motivated the use of a variational approach so as to be able to make multiple predictions in case of ambiguities, but only presents a few qualitative examples.
NIPS
We showcase our approach on the downstream task of scene editing, where the user is presented with interactive tools to complete and erase objects in an image. Experiments demonstrate significant improvements over the recently released state-of-the-art approach [39]. A user study further reveals that participants prefer amodal masks produced by our model over the human-annotated amodal masks.

2 Related Work

We focus our review on amodal mask completion, which is the primary contribution of our work. The task of amodal instance segmentation aims at segmenting both the visible and occluded parts of an object instance. This is in contrast to traditional semantic segmentation [34, 6] or instance segmentation [11, 28, 1, 24], which aim to segment only the visible pixels of an object. Prior work usually decomposes amodal instance segmentation into instance segmentation and amodal mask completion. Supervision is needed for both stages, typically resorting to either synthetic datasets or human-provided labels, which we discuss below.

Real Datasets: Recently, human-labeled real datasets have been collected for amodal instance segmentation: KITTI [10] was extended to create KINS [27], and COCO [23] to create COCOA [42]. However, there is little labeled data available, in part due to the ambiguity of the labeling task.

Synthetic Datasets: One plausible way to obtain amodal labels is to exploit graphics renderers [14, 43, 17]. In [14], a photo-realistic video dataset is extracted from the GTA-V game along with pixel-accurate masks. In [16], 3D models are aligned with images from PASCAL 3D+ [36] and rendered along with their annotated 3D pose to obtain masks and amodal bounding boxes. In [8], the authors created DYCE by taking snapshots of 3D synthetic scenes [43]. While 3D content provides labels for "free" via rendering, it is not widely available and typically lacks diversity and realism.

Simulated Data: A simple way to utilize real data annotated with instance (but not amodal) masks is to simulate occlusion, i.e., by overlaying objects on top of other objects [21, 37, 39]. One problem with this type of approach is that the composited images do not look natural, so appearance-based models may not generalize well to real images. [37] created OVD (Occluded Vehicle Dataset) by randomly placing pedestrians and vehicles on base images and exploited the Deep Harmonization [35] technique to make the synthetic images look more natural. Our work, while also relying on occlusion simulation, does so only for object masks, ignoring appearance altogether. Our method can thus exploit either rendered masks or masks from one dataset for use on another dataset.

Methodology: In most prior work, labels, either real or synthetic, are used in a standard supervised framework. In [21], the authors perform amodal segmentation by iteratively expanding the bounding box around an instance mask based on heat intensity. [27] proposes an occlusion classification branch on top of an RPN [28]. In addition to the standard mask prediction loss, [37] utilizes a discriminator loss to encourage amodal predictions to look more similar to amodal masks rendered via the ShapeNet dataset [5]. Our work also bears similarity to the recent De-occlusion paper [39] due to the application to scene editing; however, the approaches for mask completion differ in methodology: we frame the problem probabilistically, while [39] is deterministic.
Related to our work are [32, 38], in which the authors train a VAE [20] to learn a 3D shape prior; this prior is then used to generate closed 3D meshes [32] from partial point cloud observations. Several unsupervised methods have also been proposed for amodal mask completion. These works treat amodal completion as a contour completion problem, usually solved by minimizing a shape energy: [18] uses Euler spirals, [30] exploits Contour-Completion Random Fields, and [22] utilizes minimum Hamiltonian cycles and Bezier curves. However, most of these unsupervised methods focus on simple shapes and cannot easily be scaled to real-world datasets.

3 Background and Problem Formulation

In this section, we review Variational Autoencoders (VAEs) and formally define the problem of amodal instance completion that we are addressing.

3.1 Variational Autoencoders

Given a dataset $D = \{y_i\}_{i=1}^N$, the VAE framework enables us to learn a latent variable generative model $p(y, z) = p_{w_1}(y|z)\,p(z)$, where $p(z)$ is a prior distribution over latent variables and $p_{w_1}(y|z)$ is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters $w_1$ [20, 29]. Since the true posterior distribution $p(z|y)$ is intractable, VAEs employ an auxiliary approximate posterior distribution, or encoder, $q_{w_2}(z|y)$, parametrized by another neural network with parameters $w_2$. When additional information about the data is available, such as the samples' classes or categories $c$, the framework can be extended to conditional VAEs, in which the encoder, prior, and decoder can be conditioned on this class information [19, 31].

VAEs are trained via variational inference, maximizing the Evidence Lower BOund (ELBO). Here, we consider the case in which only the encoder is conditioned on additional class information $c$ that is available for all samples in the dataset $D$. The ELBO then is

$$\mathcal{L}_{\text{VAE}}(w_1, w_2) = \mathbb{E}_{y,c \sim D}\left[\mathbb{E}_{z \sim q_{w_2}(z|y,c)}\left[\log p_{w_1}(y|z)\right] - \lambda D_{\text{KL}}\left(q_{w_2}(z|y,c)\,\|\,p(z)\right)\right] \quad (1)$$

When calculating gradients during training, the expectation over the data is estimated using minibatches, and the expectation over the latent variables $z$ is usually calculated using a single sample from the approximate posterior. Parameter updates are done with stochastic gradient descent, employing the re-parameterization trick [20, 29]. Due to the KL regularization, the model learns to encode data $y$ in an efficient low-dimensional latent representation $z$. Although strict variational inference corresponds to $\lambda = 1$, it has been shown that different values of $\lambda$ allow us to carefully control the balance between the KL and reconstruction terms [3, 40, 2, 13, 4], which can be beneficial.
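As a reference implementation of Eq. 1, here is a minimal PyTorch sketch of the (negative) conditional ELBO with a Bernoulli decoder and the closed-form Gaussian KL; `encoder` and `decoder` are assumed callables, not the authors' code.

```python
import torch
import torch.nn.functional as F

def elbo_loss(encoder, decoder, y, c, lam=1.0):
    """Negative ELBO of Eq. 1 for a class-conditional encoder, a Bernoulli
    decoder, and a standard Normal prior (single posterior sample)."""
    mu, logvar = encoder(y, c)                             # q_{w2}(z | y, c)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
    logits = decoder(z)                                    # p_{w1}(y | z), Bernoulli logits
    recon = F.binary_cross_entropy_with_logits(logits, y, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + lam * kl
```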
3.2 Amodal Instance Completion

Let $D = \{\hat{y}_i\}_{i=1}^N$ be a dataset of "partial" instance object masks $\hat{y}_i \in \hat{Y}$ in images. We define an Amodal Mask Completion method as a mapping $f : \hat{Y} \rightarrow Y$ with completed masks $y_i \in Y$. In words, the amodal instance completion task recovers the occluded part of a particular object from its partially occluded instance mask. If available, we can use additional information in the function $f$, such as the images' RGB pixel values or the instances' classes $c_i$, as in the VAE framework. Note that, formally, the set of realistic complete masks $Y$ is a subset of all possible partial masks $\hat{Y}$.

4 Variational Object Completion

A trivial solution to the task of amodal mask completion would be to collect a training dataset $D_{\text{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$ consisting of paired partial masks $\hat{y}_i$ and corresponding complete masks $y_i$ (and potentially additional information, such as instance classes $c_i$), and to then fit a parametric model, i.e., a neural network, treating the task as an image segmentation problem. However, annotating an amodal dataset is challenging, time-consuming, expensive, and sometimes ambiguous, as the shapes resulting from occlusions may not even be well-defined. The resulting annotations may vary from individual to individual, which could also make learning more difficult.

Instead, we pursue a weakly-supervised approach, in which we have access to data with only partially visible masks ($\hat{Y}$) and separate data with only full masks ($Y$). As shown in Figure 2, we use a VAE framework in which we first encode partially visible masks $\hat{y}$ into a smooth latent space and then decode the resulting latent codes $z$ into full masks $y$. A crucial advantage of the probabilistic VAE-based framework is that it naturally captures the ambiguity of completing partial masks in its posterior distribution (see Fig 6). Furthermore, it deals gracefully with inputs that it is uncertain about: since the model is trained such that all points under the prior distribution map to realistic completed masks, slightly erroneous latent code predictions still decode into well-defined outputs. We denote our model Amodal-VAE. Next, we present Amodal-VAE and how we train it to overcome the previously discussed challenges in more detail.

4.1 Learning to Reconstruct Full Objects

We start by presenting the high-level architecture of Amodal-VAE. For simplicity, we assume a factorial Normal prior distribution $p(z) = \mathcal{N}(0, I)$ and factorial Normal approximate posteriors $q_{w_2}(z|y, c)$ and $\hat{q}_{w_3}(z|y, c)$, with means and standard deviations parametrized via convolutional neural networks that also see the objects' categories $c$, which are available in all datasets we work with or can be predicted if necessary. The decoder $p_{w_1}(y|z)$ is a factorial Bernoulli distribution that predicts binary masks and is parametrized using a deconvolutional neural network (see supplementary material for details). To best leverage the two separate datasets $Y$ (fully visible masks) and $\hat{Y}$ (partially visible masks), we train Amodal-VAE in three stages.

(1) Full-Mask-only Training: We want Amodal-VAE to generate only realistic full masks, even when provided with significantly occluded partial masks as input. Hence, in the first stage we focus on learning the generative component $p_{w_1}(y|z)\,p(z)$ of the model and train Amodal-VAE on full masks only, using the ELBO defined in Eq. 1 on $Y$. The model thereby learns low-dimensional representations of complete masks of real objects in its continuous latent space.

(2) Simulated Partial-to-Full-Mask Training: After (1), any point in latent space under the prior maps to a realistic completed mask. Now, based on the full mask data, we simulate various occlusions, generating a synthetic dataset of paired partial and complete masks of the form $D_{\text{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$. Freezing the previously learnt decoder $p_{w_1}(y|z)$, we then learn a new encoder $\hat{q}_{w_3}(z|\hat{y}, c)$ with parameters $w_3$ that maps partial masks $\hat{y}$ to points in latent space $z$ that decode into the correct completed masks $y$.
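A minimal sketch of the occlusion simulation used to build the paired dataset; using `torch.roll` for the random placement is our simplification (it wraps around image borders), not the authors' exact procedure.

```python
import torch

def simulate_occlusion(y_instance, y_foreground, max_shift=32):
    """Create a simulated partial mask by pasting a randomly shifted
    foreground instance mask in front of a full instance mask.
    Both masks are (H, W) binary float tensors."""
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    fg = torch.roll(y_foreground, shifts=(dy, dx), dims=(0, 1))  # random placement
    y_partial = y_instance * (1 - fg)   # pixels covered by the foreground are removed
    return y_partial, y_instance        # paired training example (y_hat, y)
```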
For constructing the synthetic dataset, we sample random instances yforeground and yinstance from Y and mask out yinstance by randomly positioning yforeground in front of it, similar to [39]. We can now maximize the following adapted ELBO objective LAmodal-VAE(w3) = Eŷ,y,c∼Dtrain [ Ez∼q̂w3 (z|ŷ,c) [log pw1(y|z)]− λDKL(q̂w3(z|ŷ, c)‖p(z)) ] (2) where ŷ are the simulated partial masks, y are the full masks, and c is additional object class information. Notice that only the new encoder parameters w3 are optimized and that we do not use the RGB image information. The composition of the new encoder with the frozen decoder forms the amodal instance completion mapping, which we can formally express as f(ŷ, c) = pt w1 (q̂µ w3 (ŷ, c)), where we defined the deterministic functions q̂µ w3 (ŷ, c) as the mean of q̂w3(z|ŷ, c) and p t w1 (z) as the binary output mask calculated from pixelwise Bernoulli probabilities pw1(y|z) with threshold t. Intuitively, the first term in Equation 2 is the reconstruction loss that guides the encoder to find an appropriate position in the low dimensional Gaussian manifold which is decoded to Y. The second term, the KL loss, regularizes the new approximate posterior q̂w3(z|ŷ, c) to generate only encodings that fall under the prior distribution p(z). Because of the first training step and since we keep the decoder frozen, all such encodings z map to complete masks. To aid the new encoder to more easily search the latent space, we exploit an additional latent code distance loss. We pull encodings from complete and corresponding partial masks close to each other, since they both need to decode into the same full masks. We minimize the following loss: LLatentCode(w3) = Eŷ,y,c∼Dtrain [ Eẑ∼q̂w3 (z|ŷ,c),z∼q̂w3 (z|y,c) 1 2 [ẑ − z]2 ] , (3) for paired ŷ and y. We approximate the inner expectation using single samples from the approximate posteriors. We found adding this loss to the ELBO objective to slightly increase performance. However we found that it can’t replace the reconstruction loss. The final loss becomes: L(w3) = LLatentCode(w3) + LAmodal-VAE(w3) (4) (3) Partial-Mask-only Finetuning: In the third training stage, we “finetune” the Amodal-VAE by training its encoder in standard VAE-fashion using only partial masks from Ŷ, masking out all non-visible pixels. Finetuning the Amodal-VAE in this way helps the model to deal with complex realistic occlusions, which may not occur during the occlusion simulation in (2), for example since we only use single foreground instances to create simulated occlusions. The decoder remains frozen. For a partially visible mask ŷ, we define its visible pixels as ŷvis. We can define an ELBO as LFinetuning(w3) = Eŷ,c∼Ŷ [ Ez∼q̂w3 (z|ŷ,c) [log pw1(ŷ vis|z)]− λDKL(q̂w3(z|ŷ, c)‖p(z)) ] (5) where we consider only the reconstruction loss on the visible pixels. In training stages (2) and (3), we additionally apply a spatial transformer network on the output, that learns to resize the completed masks such that they can be pasted back into the scene (see Sec 4.2). Motivation: One may ask, why separate training stages (1) and (2)? When learning the actual amodal completion model in step (2), the approximate posterior sees different partial masks, which can look entirely different due to different simulated occlusions, but that nevertheless map to similar completed masks. Alternatively, similar partial masks may correspond to very different completed masks. 
Training on such data constitutes a very difficult and ambiguous learning problem, unlike regular VAE training. If the generative component, i.e. the decoder, was also trained like this, it would result in a weaker model encoding less information in latent space. Therefore, we found it to be beneficial to separately train the generative component in robust standard-VAE fashion with full masks only first and then freeze it. After all, we know that we want to generate only ever full masks. In other words, we are separating the difficulty of learning a high quality generative component from the difficulty of learning to map many different partial masks to similar completed masks and vice versa. Note that we also have to train the spatial transformer in step (2). It is easier to first learn the decoder on full masks only and then separately learn the spatial transformer on top of the “correct” decoder, instead of training both simultaneously. 4.2 Resizing Completed Masks with Spatial Transformers Both input and output of Amodal-VAE are tightly cropped 2D instance masks, separately resized or squeezed to the model’s fixed input and output dimensions. Therefore, the output masks are not in the same scale as the partial input masks. Because of that, we cannot simply resize and paste the completed masks back into the image. To overcome this hurdle, we learn an affine transformation that shifts and scales the output mask to correct for the discrepancy. The output mask can then be pasted back into the full image using the resizing and positioning of the partial input mask (see Fig 2). With an instance’s partial mask ŷ and completed mask y, generated by Amodal-VAE’s decoder in the VAE’s fixed output dimensions, we learn a spatial transformation function gθ(y, ŷ) → y ′ such that the transformed y′ is the completed mask in the same scale and at the same position as the input mask ŷ. Specifically, we first predict the transformation parameters (tx, ty, sx, sy) = gθ(y, ŷ) Aθ = [ sx 0 tx 0 sy ty ] (6) where gθ is a neural network and Aθ is a 2D affine transformation matrix that is applied to each pixel in y and used to do differentiable image sampling as defined in [15]. The transformation defined through gθ and Aθ is end-to-end differentiable and can be trained by backpropagation together with the Amodal-VAE. The spatial transformer function, operating on the Amodal-VAE output, is trained during training stages (2) and (3) (in training stage (1) we train on complete masks only). 5 Experiments We now extensively evaluate our Amodal-VAE and show its application to interactive scene editing. Please refer to supplementary material for training and model implementation details. Dataset: We focus on street scenes in this paper. KINS [27] is a large scale dataset derived from KITTI [10], which contains both instance and amodal annotations. The dataset consists of 7,474 images for training and 7,517 images for testing. There are 18,241 and 17,646 complete instances in the training and test set respectively. Following [39], we use the first ≈ 10% images from the test set as validation set (750 images in total). In this paper, we only exploit instance masks in training and amodal ground truth labels are only used for evaluation. The Cityscapes dataset [7] contains 5,000 images of driving scenes, including 2,975 images for training, 500 for validation, and 1,525 for testing. In the training set, 11,251 out of 52,469 instances are without occlusion. 
The instance masks in Cityscapes are finely annotated for the visible portions of the objects, however, no amodal annotations are available. In this paper, we treat Cityscapes as an additional dataset to test generalization of our approach. 5.1 Amodal Mask Completion Comparisons: We first benchmark our approach for the task of amodal mask completion. To compare with baseline models, we use the amodal completion setting introduced in [39], where at test time RGB images and ground truth (GT) instance masks are provided as input to our model. Since our model does not exploit specific foreground occlusion masks as input, we use the De-occlusion-NOG (no order grounding) setting as a baseline. The performance of our model on KINS is shown in Table 1. Because occluded regions are relatively small compared to full masks, the input instance masks have a high 87.03% mean Intersection over Union (mIOU) with the GT full masks. For this reason, we separately evaluate mIOU on the invisible area only as well. Results show that Amodal-VAE outperforms the state-of-the-art De-occlusion [39] model by 5.66% for invisible mIOU and 0.64% for full mIOU, which is a significant improvement. For another baseline experiment, we generate a synthetic dataset from the KINS training set. Using the full mask data, for each mask we simulate 5 different occlusions by randomly pasting another mask as foreground, hence generating a synthetic dataset of paired partial and complete masks consisting of 91,205 examples. We can now use a nearest neighbor-based approach for mask completion. We Method GT Crop mIOU Invis. mIOU Instance Mask ✗ 87.03 0 Nearest Neighbor Mask ✗ 93.71 54.97 De-occlusion ✗ 94.04 57.19 Amodal-VAE ✗ 94.68 62.85 RGB-Amodal-VAE ✗ 94.53 61.97 Amodal-VAE + GT Box X 97.64 82.30 Table 1: Amodal Completion on KINS. Invisible mIOU means we evaluate mIOU only on invisible areas. GT Crop denotes that input is cropped by GT amodal bounding box. Method Full mIOU Invisible mIOU Amodal-VAE 94.68 62.85 w/o. Full-Mask training 94.28 58.92 w/o. Simulated training 83.30 35.82 w/o. Likelihood training 94.04 57.04 - Latent Space L2 loss 94.02 56.90 w/o. Class Conditioning 93.56 53.03 Table 2: Ablation study of Amodal-VAE on KINS. compute the cosine similarity between an input partial mask and the synthetic partial masks and then use as output the full mask corresponding to the synthetic partial one with the highest similarity to the input. Results show that Amodal-VAE outperforms this baseline (Nearest Neighbor Mask in Table 1). We further ablate the use of the RGB information as additional input to the VAE. After the fullmask-only training stage, we use a ResNet-50 pretrained on ImageNet, which takes cropped RGB images as input, concatenate the ResNet’s features and the mask encoder output, add two further convolutional layers to merge the two, and predict the latent code posterior distribution. The ResNet is finetuned together with all other trainable parameters and we optimize the setup’s hyperparameters and report the best result. As shown in Table 1, line RGB-Amodal-VAE, the additional RGB-based image features do not boost performance. Hence, for our main Amodal-VAE model we discard the RGB input for simplicity. It is possible that a more carefully designed model architecture will be able to extract more useful information from the RGB input as the slight decrease in performance might seem counterintuitive, but we leave this for future research. In the experiments above, we always tightly crop the instance mask. 
However, in an interactive scene editing tool, users can be asked to provide the amodal box. Thus, we evaluate our method also by utilizing GT amodal bounding boxes, which precisely indicate the extent of the occluded area. In these experiments (Amodal-VAE + GT Box), we achieve 97.64% and 82.30% mIOU, respectively. This suggests that there is much room for improvement by better cropping the input masks automatically. Posterior sampling: To further motivate the use of a probabilistic model, we show quantitative results from multiple posterior predictions. For each partial mask instance, we sample 20 latent codes from the approximate posterior distribution and decode to the corresponding completed masks. We calculate mIOU using masks with the best visible area IOU or best amodal GT IOU. The results in Table 4 show that by sampling we find masks that match the amodal GT significantly better than using the approximate posterior mode. Hence, the approximate posterior incorporates diverse plausible masks, correctly capturing the ambiguity. Using samples from the full posterior distribution may benefit downstream applications. Additional results are provided in the supplementary material: We analyze approximate posterior widths as a function of occlusion ratio and we also show prior samples. Ablations: We first ablate the three training stages described in Sec. 4.1. The results in Table 2 show that the performance drops by 3.93% if we omit the first Full-Mask-only training stage. Furthermore, the model performs significantly worse without the second occlusion-simulation training stage, because this is where the model learns to actually map partial to full masks. Likelihood-based (i.e. using the ELBO) partial-mask-only finetuning as the third stage plays an important role, since it brings real occluded instances into the training loop. Also, conditioning on class information is crucial, as it helps the VAE to better infer the masks, especially when there is a large occlusion. Next, we conduct cross dataset evaluations. We train Amodal-VAE on the Cityscapes training set and evaluate on the KINS test set. Due to the mismatch in class categories across datasets, we merge the bus and car classes into one class, and motorcycle and bicycle classes into another. Results in Table 5 show cross domain stability of our model. We consistently outperform the De-occlusion baseline. Method GT Crop Full mIOU Invis. mIOU Amodal-VAE ✗ 93.72 56.18 De-occlusion ✗ 93.19 48.23 Table 5: Cross Domain Amodal Completion. Models are trained on Cityscapes and tested on KINS. Amodal-VAE GT-box Ground Truth No Preference 46.68 39.50 13.8 Table 6: User Study. We evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. Interestingly, subjects prefer our object completions to the human-labeled ones. Qualitative results: We show qualitative results in Figure 3. We also compare to human-annotated masks in Figure 4. Our generated masks contain more details and look more natural than GT masks. We further show shape variations by sampling from the approximate posterior distribution in Figure 6 and Figure 7. Different plausible completions are drawn from a single partial mask. User study: We also evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. We assume that the user draws the amodal box which is provided to Amodal-VAE. 
We randomly sampled 3260 instances from the KINS test set and asked Turkers to indicate preference between Amodal-VAE’s amodal masks and GT annotated amodal masks. Interestingly, as shown in Table 6, users prefer Amodal-VAE’s masks 46.68% of the time versus 39.50% for ground truth. This demonstrates that Amodal-VAE outperforms the drawing skills of the human annotators of the KINS dataset [27]. In the supplementary material, we provide additional results on amodal segmentation, where we first predict modal segmentation masks using a standard segmentation model, and then use Amodal-VAE to complete partial segmentation masks. 5.2 Object Manipulation Application Here, we apply the Amodal-VAE to interactive scene editing and report the results. Background and Instance Inpainting: Since Amodal-VAE can be used to predict complete instance masks for all objects in a scene, we can use these inferred masks to move or delete objects. Such operations will uncover previously occluded parts of the objects and the background. We complete the missing content using an inpainting neural network, which takes RGB images with missing content as input and generates a realistic completed output. Similar to [39], we are using the convolutional inpainting network from [25], which employs partial convolutions and nearest neighbor up-sampling in the decoding stage. Inpainting details are available in the supplementary material. We benchmark the performance of instance inpainting. Since we do not have any ground truth appearance for the invisible areas, we exploit Fréchet Inception Distance [12] (FID Score) to evaluate the inpainting results. FID is a measure of similarity between two datasets of images. It was shown to correlate well with human judgment of visual quality and is most often used to evaluate the quality of samples from Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network [33]. In our case, we use non-occluded instances in the KINS test set as a reference dataset. For each instance, we use Amodal-VAE and the inpainting network to complete the mask and appearance. We compute FID distances between the reference dataset and inpainting results based on predicted amodal masks. Intuitively, the better and more natural the amodal mask is, the lower the FID score should be. Our Amodal-VAE achieves 41.44 versus 50.36 for the baseline De-occlusion approach. Note that the inpainting networks we use for both methods are identical. We thus conclude that the amodal masks predicted by Amodal-VAE lead to more natural completions. Instance Manipulation: Furthermore, we show how we can change the pose of objects, even of those which are partially occluded. Since we are working with complex street scenes, we focus on cars for this demonstration. We exploit GauGAN [26], which can separately take into account local appearance and mask shape. We first infer an object’s complete shape and appearance as described above. Then, we use GauGAN’s encoder to infer its latent representation, which captures only local appearance information. When regenerating the image, we randomly sample complete shapes from the test set and feed them to the SPADE layers. Due to the separation of local appearance and global semantic information in GauGAN, the newly generated scene reflect any pose changes. Qualitative results: We first show the qualitative inpainting results in Table 3. 
Conditioned on the complete mask, the inpainting network recovers the invisible appearance successfully. We also deleted the foreground mask by another background inpainting module. In Figure 5, we showcase different functionalities in our interactive scene editing tool. Based on the amodal mask, our tool supports swapping order, deleting, moving, and scaling objects. We also showcase how we can change the pose of objects by utilizing GauGAN as described. 6 Conclusions In this work, we propose Amodal-VAE, a simple probabilistic method for amodal instance completion, which does not require amodal labels for training. In particular, our method is based on a variational autoencoder that learns to reconstruct full object masks from partially occluded ones by using a carefully designed training strategy. This exploits both full and partial instances available in existing segmentation datasets. We quantitatively and qualitatively showcase the performance of our method on the downstream task of scene editing of complex street scenes. Our experiments show significant improvement over the recently proposed state-of-the-art method. We provide our method as an interactive image editing tool where users can remove, move, or swap different objects in the image. Note that training Amodal-VAE requires a high quality dataset with complete masks and each category must contain a sufficient number of objects. Therefore, in this work we focus on driving scenes which contain mainly rigid objects and for which sufficient data is available. Applying our model on more complex scenes and in a setting with limited data is left for future work. 7 Broader Impact Our proposed model can be used in a wide range of applications that require reasoning on occluded objects. These include planning tasks in robotics, object tracking, and editing a photo or video. We focus on two significant impacts of using our model. The first is in the context of autonomous driving. An autonomous driving car must infer the geometry and identity of surrounding objects for its decision-making process. Partially visible objects could lead to wrong estimates for motion planning, and thus reasoning about the full extent of objects can lead to much safer control. Our approach infers the complete shapes of the occluded objects for this purpose. The other major impact is on augmented reality. One could use our technology to snap a photograph of their environment, and "delete" existing objects from the photograph, replacing them with alternatives. The crux of our approach is in deleting content from an image, which could be subject to misuse. We encourage work on detecting fakes as the standard technology to deal with image manipulation approaches. Acknowledgments and Disclosure of Funding This work was fully funded by NVIDIA and no third-party funding was used.
1. What is the main contribution of the paper, and how does it relate to previous works?
2. What are the strengths of the proposed approach, particularly in terms of its training process and practical applications?
3. What are the weaknesses of the method, especially regarding the required additional information and limitations in testing multiple plausible completions?
4. How do the results of the user studies demonstrate the advantage of the approach?
5. Are there any potential applications of the method beyond image editing, such as in robotics or computer vision?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper proposes a variational autoencoder for estimating the complete mask of a partially occluded object from its visible mask (segmentation) in an image. Through a three-stage training process reminiscent of curriculum learning, the method is able to learn mask completion without relying on a paired dataset with human-annotated completions. The method is evaluated on objects appearing in typical street scenes (cars, bikes, and pedestrians).

Strengths
+ The paper is very well written and easy to follow. The related work and background are well described.
+ The three-stage training process has good motivation (and perhaps can be thought of as a type of curriculum learning).
+ The results appear to be of practical use (in image editing), and user studies demonstrate the advantage of the approach.

Weaknesses
- The method requires additional information beyond what is typically provided in a semantic segmentation dataset, in that it needs to know whether masks are complete or incomplete for training.
- The ablation study shows the importance of the unpaired data for finetuning, but it also highlights the fact that the invisible components of the objects are typically small (i.e., far fewer pixels than the visible part). It would be interesting to see a plot of accuracy versus percentage occlusion.
- The multiple plausible completions are never really tested, given that the experiments are only run on rigid objects (I'm considering pedestrians here as rigid given their size and viewpoint in the chosen datasets). It would have been interesting to see results on articulated objects (such as animals) or objects with greater variation in possible shape.
Title Variational Amodal Object Completion Abstract In images of complex scenes, objects are often occluding each other which makes perception tasks such as object detection and tracking, or robotic control tasks such as planning, challenging. To facilitate downstream tasks, it is thus important to reason about the full extent of objects, i.e., seeing behind occlusion, typically referred to as amodal instance completion. In this paper, we propose a variational generative framework for amodal completion, referred to as Amodal-VAE, which does not require any amodal labels at training time, as it is able to utilize widely available object instance masks. We showcase our approach on the downstream task of scene editing where the user is presented with interactive tools to complete and erase objects in photographs. Experiments on complex street scenes demonstrate state-of-the-art performance in amodal mask completion, and showcase high quality scene editing results. Interestingly, a user study shows that humans prefer object completions inferred by our model to the human-labeled ones. 1 Introduction One of the most remarkable properties of the human visual system is the ability to rapidly recognize objects and understand their spatial extent in complex visual scenes, even when objects are barely visible due to occlusion [9, 42]. This is important, as it allows humans to more accurately anticipate what can happen a few moments into the future, and plan accordingly. We expect such a capability to also benefit robotic systems. Reasoning about objects and their extent is also key in other contexts, for example, in semantic image editing tasks. Imagine a user that wants to erase an object from a photograph, and possibly even manipulate objects that are partially hidden behind it. To do this, an A.I. system needs to be able to “complete” the occluded objects in the scene, both in their spatial extent, i.e., their masks, as well as in appearance. This problem is typically referred to as amodal instance completion, and is an important component of many applications. However, most research in the domain of semantic segmentation, has focused on the “modal” perception of the scene [6, 11, 34], i.e., segmenting visible pixels of the objects, for which large-scale annotated datasets are available [7, 23, 41]. The lack of labeled data for amodal segmentation is likely due to the difficulty and ambiguity of the annotation task. Amodal annotation of occluded objects requires a human labeler to draw an imagined contour rather than tracing a visible contour in an image, which requires drawing skills that not all annotators possess. In cases where objects are highly occluded there may also be multiple valid hypotheses for a plausible completion. In this work, we propose a variational generative framework for amodal instance completion, called Amodal-VAE. It does not require amodal labels at training time, and exploits instance masks of visible parts of the objects that are widely available in current datasets. Our approach learns to reconstruct full objects from partial masks by training a variational autoencoder in carefully designed 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. stages that allow us to model the complete mask with a low-dimensional latent representation. The probabilistic framework naturally incorporates the ambiguity in the mask completion task, and is able to produce multiple plausible completions which existing work cannot. 
We showcase our approach on the downstream task of scene editing where the user is presented with interactive tools to complete and erase objects in an image. Experiments demonstrate significant improvements over the recently released state-of-the-art approach [39]. A user study further reveals that participants strongly prefer amodal masks produced by our model over the human-annotated amodal masks. 2 Related Work We focus our review on amodal mask completion which is the primary contribution of our work. The task of amodal instance segmentation aims at segmenting both visible and occluded parts of an object instance. This is in contrast to traditional semantic segmentation [34, 6] or instance segmentation tasks [11, 28, 1, 24], which aim to segment only the visible pixels of an object. Prior work usually decomposes amodal instance segmentation into instance segmentation and amodal mask completion. Supervision is needed for both stages, typically resorting to either synthetic datasets or human-provided labels, which we discuss below. Real Datasets: Recently, human-labeled real datasets have been collected for amodal instance segmentation. Authors extended KITTI [10] to create KINS [27], and COCO [23] to create COCOA [42]. However, there is little available labeled data, in part due to the ambiguity of the labeling task. Synthetic Datasets: One plausible way to get amodal labels is to exploit graphics renderers [14, 43, 17]. In [14], a photo-realistic video dataset is extracted from the GTA-V game along with pixelaccurate masks. In [16], 3D models are aligned with images from PASCAL 3D+ [36] and rendered along with their annotated 3D pose to obtain masks and amodal bounding boxes. In [8], the authors created DYCE by taking snapshots from 3D synthetic scenes [43]. While 3D content provides labels for “free" via rendering, it is not widely available and typically lacks diversity and realism. Simulated Data: A simple way to utilize real data annotated with instance (but not amodal) masks is by simulating occlusion, i.e., by overlaying objects on top of other objects [21, 37, 39]. One problem with this type of approach is that the composited images do not look natural and thus appearance-based models may not generalize well to real images. [37] created OVD (Occluded Vehicle Dataset) by randomly placing pedestrians and vehicles on base images and exploited the Deep Harmonization [35] technique to make synthetic images look natural. Our work, while also relying on occlusion simulation, does so only for object masks, ignoring appearance altogether. Our method can thus exploit either rendered masks, or masks from one dataset for use on another dataset. Methodology: In most prior work, labels, either real or synthetic, are used in a standard supervised framework. In [21], the authors perform amodal segmentation by iteratively expanding the bounding box around an instance mask based on heat intensity. [27] proposes an occlusion classification branch on top of RPN [28]. In addition to the standard mask prediction loss, [37] utilizes a discriminator loss to encourage amodal predictions to look more similar to amodal masks rendered via the Shapenet dataset [5]. Our work also bears similarity to the recent De-occlusion paper [39] due to the application to scene editing. However, the approaches for mask completion differ in methodology, where ours frames the problem probabilistically, while [39] is a deterministic method. 
Related to our work is [32, 38], where the authors train a VAE [20] to learn a 3D shape prior. This prior is then used to generate closed 3D meshes [32] from partial point cloud observations. Several unsupervised methods have also been proposed for amodal mask completion. Prior works treat amodal completion as a contour completion problem, usually recovered by minimizing a shape energy. [18] uses Euler spirals, [30] exploits Contour-Completion Random Fields, and [22] utilizes minimum Hamiltonian cycles and Bezier curves. However, most of these unsupervised methods focus on simple shapes and cannot easily be scaled to real-world datasets.

3 Background and Problem Formulation

In this section, we review Variational Autoencoders (VAEs) and formally define the problem of amodal instance completion, which we are addressing.

3.1 Variational Autoencoders

Given a dataset $\mathcal{D} = \{y_i\}_{i=1}^N$, the VAE framework enables us to learn a latent variable generative model $p(y, z) = p_{w_1}(y|z)p(z)$, where $p(z)$ is a prior distribution over latent variables and $p_{w_1}(y|z)$ is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters $w_1$ [20, 29]. Since the true posterior distribution $p(z|y)$ is intractable, VAEs employ an auxiliary approximate posterior distribution, or encoder, $q_{w_2}(z|y)$, parametrized by another neural network with parameters $w_2$. When additional information about the data is available, such as the samples' classes or categories $c$, the framework can be extended to conditional VAEs, in which the encoder, prior and decoder can be conditioned on this class information [19, 31]. VAEs are trained via variational inference, maximizing the Evidence Lower BOund (ELBO). Here, we consider the case in which only the encoder is conditioned on additional class information $c$ that is available for all samples in the dataset $\mathcal{D}$. The ELBO then is

$\mathcal{L}_{\textrm{VAE}}(w_1, w_2) = \mathbb{E}_{y,c \sim \mathcal{D}} \left[ \mathbb{E}_{z \sim q_{w_2}(z|y,c)} [\log p_{w_1}(y|z)] - \lambda D_{KL}(q_{w_2}(z|y,c) \,\|\, p(z)) \right]$  (1)

When calculating gradients during training, the expectation over the data is estimated using mini-batches, and the expectation over the latent variables $z$ is usually calculated using a single sample from the approximate posterior. Parameter updates are done with stochastic gradient descent, employing the re-parameterization trick [20, 29]. Due to the KL regularization, the model learns to encode data $y$ in an efficient low-dimensional latent representation $z$. Although strict variational inference corresponds to $\lambda = 1$, it has been shown that different values of $\lambda$ allow us to carefully control the balance between the KL and the reconstruction terms [3, 40, 2, 13, 4], which can be beneficial.

3.2 Amodal Instance Completion

Let $\hat{\mathcal{D}} = \{\hat{y}_i\}_{i=1}^N$ be a dataset of "partial" instance object masks $\hat{y}_i \in \hat{\mathbb{Y}}$ in images. We can define an Amodal Mask Completion method as a mapping $f: \hat{\mathbb{Y}} \to \mathbb{Y}$ with completed masks $y_i \in \mathbb{Y}$. In words, the amodal instance completion task recovers the occluded part of a particular object from the partially occluded instance mask. If available, we can use additional information in the function $f$, such as the images' RGB pixel values or the instances' classes $c_i$, as in the VAE framework. Note that, formally, the set of realistic complete masks $\mathbb{Y}$ is a subset of the set of all possible partial masks $\hat{\mathbb{Y}}$.
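For concreteness, Eq. (1) can be written down directly. The following is a minimal PyTorch-style sketch of the class-conditional ELBO for a factorial Gaussian posterior and a pixelwise Bernoulli decoder; all module names, shapes, and hyperparameters are our own illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def elbo_loss(encoder, decoder, y, c, lam=1.0):
    # Encoder predicts mean and log-variance of the factorial Gaussian
    # posterior q_{w2}(z | y, c); c is the instance category.
    mu, logvar = encoder(y, c)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)          # re-parameterization trick
    logits = decoder(z)                           # pixelwise Bernoulli logits
    # Reconstruction term: log p_{w1}(y | z) for binary masks of shape (B,1,H,W).
    rec = -F.binary_cross_entropy_with_logits(
        logits, y, reduction='none').sum(dim=[1, 2, 3])
    # KL(q || N(0, I)), available in closed form for factorial Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    # Negative ELBO (to be minimized), averaged over the mini-batch.
    return (-(rec - lam * kl)).mean()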
4 Variational Object Completion

A trivial solution to the task of Amodal Mask Completion would be collecting a training dataset $\mathcal{D}_{\textrm{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$ consisting of paired partial masks $\hat{y}_i$ and corresponding complete masks $y_i$ (and potentially additional information, such as instance classes $c_i$). Then, we could fit a parametric model, i.e., a neural network, to it by treating it as an image segmentation problem. However, annotating an amodal dataset is challenging, time-consuming, expensive and sometimes ambiguous, as objects resulting from occlusions may not even be well-defined. The resulting annotations may vary from individual to individual, which could also make learning more difficult. Instead, we exploit a weakly-supervised approach, where we have access to data with only partially visible masks ($\hat{\mathbb{Y}}$) and separate data with only full masks ($\mathbb{Y}$). As shown in Figure 2, we use a VAE framework in which we first encode partially visible masks $\hat{y}$ into a smooth latent space and then decode the resulting latent codes $z$ into the full masks $y$. A crucial advantage of the probabilistic VAE-based framework is that it naturally captures the ambiguity of completing partial masks in its posterior distribution (see Fig. 6). Furthermore, it also deals gracefully with inputs that it is uncertain about. Since the model is trained such that all points under the prior distribution map to realistic completed masks, slightly erroneous latent code predictions still decode into well-defined outputs. We denote our model as Amodal-VAE. Next, we present Amodal-VAE and how we train it in order to overcome the previously discussed challenges in more detail.

4.1 Learning to Reconstruct Full Objects

We start by presenting the high-level architecture of Amodal-VAE. For simplicity, we assume a factorial Normal prior distribution $p(z) = \mathcal{N}(0, I)$ and factorial Normal approximate posteriors $q_{w_2}(z|y, c)$ and $\hat{q}_{w_3}(z|\hat{y}, c)$, with means and standard deviations parametrized via convolutional neural networks that also see the objects' categories $c$, which are available in all datasets we are working with or can be predicted if necessary. The decoder $p_{w_1}(y|z)$ is a factorial Bernoulli distribution, predicts binary masks, and is parametrized using a deconvolutional neural network (see the supplementary material for details, and the sketch below for an illustration). To best leverage the two separate datasets $\mathbb{Y}$ with fully visible masks and $\hat{\mathbb{Y}}$ with partially visible masks, we train Amodal-VAE in three stages.

(1) Full-Mask-only Training: We want Amodal-VAE to generate only realistic full masks, even when provided with partial masks that are significantly occluded as input. Hence, during the first stage we focus on learning the generative component $p_{w_1}(y|z)p(z)$ of the model and train Amodal-VAE on full masks only, using the ELBO defined in Eq. 1 on $\mathbb{Y}$. The model learns low-dimensional representations of complete masks of real objects in its continuous latent space.

(2) Simulated Partial-to-Full-Mask Training: After (1), any point in latent space under the prior maps to a realistic completed mask. Now, based on the full mask data, we simulate various occlusions, hence generating a synthetic dataset of paired partial and complete masks of the form $\mathcal{D}_{\textrm{train}} = \{y_i, \hat{y}_i\}_{i=1}^N$. Freezing the previously learned decoder $p_{w_1}(y|z)$, we then learn a new encoder $\hat{q}_{w_3}(z|\hat{y}, c)$ with parameters $w_3$ that maps partial masks $\hat{y}$ to points in latent space $z$ that decode into the correct completed masks $y$.
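Before detailing the simulated occlusions, here is a minimal illustrative sketch of the class-conditional encoder/decoder pair just described. Layer sizes, the embedding of the category c, and all names are our own assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class MaskEncoder(nn.Module):
    """Predicts mean and log-variance of the factorial Normal posterior."""
    def __init__(self, num_classes, latent_dim=64):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, 16)
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64 + 16, 2 * latent_dim)

    def forward(self, mask, c):
        h = torch.cat([self.conv(mask), self.class_emb(c)], dim=1)
        mu, logvar = self.head(h).chunk(2, dim=1)
        return mu, logvar

class MaskDecoder(nn.Module):
    """Maps a latent code to pixelwise Bernoulli logits over a full mask."""
    def __init__(self, latent_dim=64, out_size=32):
        super().__init__()
        self.out_size = out_size
        self.fc = nn.Linear(latent_dim, 64 * (out_size // 4) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, z):
        s = self.out_size // 4
        h = self.fc(z).view(-1, 64, s, s)
        return self.deconv(h)   # logits; threshold after sigmoid at test time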
For constructing the synthetic dataset, we sample random instances $y_{\textrm{foreground}}$ and $y_{\textrm{instance}}$ from $\mathbb{Y}$ and mask out $y_{\textrm{instance}}$ by randomly positioning $y_{\textrm{foreground}}$ in front of it, similar to [39]. We can now maximize the following adapted ELBO objective

$\mathcal{L}_{\textrm{Amodal-VAE}}(w_3) = \mathbb{E}_{\hat{y},y,c \sim \mathcal{D}_{\textrm{train}}} \left[ \mathbb{E}_{z \sim \hat{q}_{w_3}(z|\hat{y},c)} [\log p_{w_1}(y|z)] - \lambda D_{KL}(\hat{q}_{w_3}(z|\hat{y},c) \,\|\, p(z)) \right]$  (2)

where $\hat{y}$ are the simulated partial masks, $y$ are the full masks, and $c$ is additional object class information. Notice that only the new encoder parameters $w_3$ are optimized and that we do not use the RGB image information. The composition of the new encoder with the frozen decoder forms the amodal instance completion mapping, which we can formally express as $f(\hat{y}, c) = p_{w_1}^t(\hat{q}_{w_3}^\mu(\hat{y}, c))$, where we define the deterministic function $\hat{q}_{w_3}^\mu(\hat{y}, c)$ as the mean of $\hat{q}_{w_3}(z|\hat{y}, c)$ and $p_{w_1}^t(z)$ as the binary output mask calculated from the pixelwise Bernoulli probabilities $p_{w_1}(y|z)$ with threshold $t$.

Intuitively, the first term in Equation 2 is the reconstruction loss that guides the encoder to find an appropriate position on the low-dimensional Gaussian manifold which decodes into $\mathbb{Y}$. The second term, the KL loss, regularizes the new approximate posterior $\hat{q}_{w_3}(z|\hat{y}, c)$ to generate only encodings that fall under the prior distribution $p(z)$. Because of the first training stage, and since we keep the decoder frozen, all such encodings $z$ map to complete masks. To help the new encoder search the latent space more easily, we exploit an additional latent code distance loss: we pull encodings from complete and corresponding partial masks close to each other, since they both need to decode into the same full masks. We minimize the following loss:

$\mathcal{L}_{\textrm{LatentCode}}(w_3) = \mathbb{E}_{\hat{y},y,c \sim \mathcal{D}_{\textrm{train}}} \left[ \mathbb{E}_{\hat{z} \sim \hat{q}_{w_3}(z|\hat{y},c),\, z \sim \hat{q}_{w_3}(z|y,c)} \tfrac{1}{2} [\hat{z} - z]^2 \right]$  (3)

for paired $\hat{y}$ and $y$. We approximate the inner expectation using single samples from the approximate posteriors. We found that adding this loss to the ELBO objective slightly increases performance; however, it cannot replace the reconstruction loss. The final loss becomes:

$\mathcal{L}(w_3) = \mathcal{L}_{\textrm{LatentCode}}(w_3) + \mathcal{L}_{\textrm{Amodal-VAE}}(w_3)$  (4)

(3) Partial-Mask-only Finetuning: In the third training stage, we "finetune" the Amodal-VAE by training its encoder in standard VAE fashion using only partial masks from $\hat{\mathbb{Y}}$, masking out all non-visible pixels. Finetuning the Amodal-VAE in this way helps the model deal with complex realistic occlusions, which may not occur during the occlusion simulation in (2), for example because we only use single foreground instances to create simulated occlusions. The decoder remains frozen. For a partially visible mask $\hat{y}$, we define its visible pixels as $\hat{y}^{\textrm{vis}}$. We can define an ELBO as

$\mathcal{L}_{\textrm{Finetuning}}(w_3) = \mathbb{E}_{\hat{y},c \sim \hat{\mathbb{Y}}} \left[ \mathbb{E}_{z \sim \hat{q}_{w_3}(z|\hat{y},c)} [\log p_{w_1}(\hat{y}^{\textrm{vis}}|z)] - \lambda D_{KL}(\hat{q}_{w_3}(z|\hat{y},c) \,\|\, p(z)) \right]$  (5)

where we consider the reconstruction loss only on the visible pixels. In training stages (2) and (3), we additionally apply a spatial transformer network on the output, which learns to resize the completed masks such that they can be pasted back into the scene (see Sec. 4.2).

Motivation: One may ask, why separate training stages (1) and (2)? When learning the actual amodal completion model in step (2), the approximate posterior sees different partial masks, which can look entirely different due to different simulated occlusions, but that nevertheless map to similar completed masks. Alternatively, similar partial masks may correspond to very different completed masks.
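For concreteness, a minimal sketch of one stage-(2) training step, combining the occlusion simulation with the losses of Eqs. (2)–(4), is given next. The shift range, the helper names, and the use of posterior means in place of the samples of Eq. (3) are our own simplifying assumptions.

import torch
import torch.nn.functional as F

def simulate_occlusion(full_mask, foreground_mask):
    # Random shift of the foreground (wrap-around roll, for brevity only),
    # then erase the overlap: the result is a partial mask paired with y.
    dx, dy = torch.randint(-8, 9, (2,)).tolist()
    fg = torch.roll(foreground_mask, shifts=(dx, dy), dims=(-2, -1))
    return full_mask * (1 - fg)

def stage2_step(enc_new, dec_frozen, y, y_fg, c, lam=1.0):
    y_partial = simulate_occlusion(y, y_fg)
    mu_p, logvar_p = enc_new(y_partial, c)
    z = mu_p + torch.exp(0.5 * logvar_p) * torch.randn_like(mu_p)
    logits = dec_frozen(z)                       # decoder weights stay frozen
    rec = F.binary_cross_entropy_with_logits(    # reconstruction term of Eq. (2)
        logits, y, reduction='none').sum(dim=[1, 2, 3]).mean()
    kl = (-0.5 * (1 + logvar_p - mu_p.pow(2)     # KL term of Eq. (2)
                  - logvar_p.exp())).sum(1).mean()
    mu_f, _ = enc_new(y, c)                      # encode the paired full mask
    latent = 0.5 * ((mu_p - mu_f) ** 2).sum(1).mean()   # Eq. (3), using means
    return rec + lam * kl + latent               # Eq. (4), as a minimization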
Training on such data constitutes a very difficult and ambiguous learning problem, unlike regular VAE training. If the generative component, i.e., the decoder, were also trained like this, the result would be a weaker model encoding less information in latent space. Therefore, we found it beneficial to first separately train the generative component in robust standard-VAE fashion on full masks only and then freeze it. After all, we only ever want to generate full masks. In other words, we separate the difficulty of learning a high-quality generative component from the difficulty of learning to map many different partial masks to similar completed masks and vice versa. Note that we also have to train the spatial transformer in step (2). It is easier to first learn the decoder on full masks only and then separately learn the spatial transformer on top of the "correct" decoder, instead of training both simultaneously.

4.2 Resizing Completed Masks with Spatial Transformers

Both input and output of Amodal-VAE are tightly cropped 2D instance masks, separately resized or squeezed to the model's fixed input and output dimensions. Therefore, the output masks are not at the same scale as the partial input masks. Because of this, we cannot simply resize and paste the completed masks back into the image. To overcome this hurdle, we learn an affine transformation that shifts and scales the output mask to correct for the discrepancy. The output mask can then be pasted back into the full image using the resizing and positioning of the partial input mask (see Fig. 2). Given an instance's partial mask $\hat{y}$ and completed mask $y$, generated by Amodal-VAE's decoder in the VAE's fixed output dimensions, we learn a spatial transformation function $g_\theta(y, \hat{y}) \to y'$ such that the transformed $y'$ is the completed mask at the same scale and position as the input mask $\hat{y}$. Specifically, we first predict the transformation parameters

$(t_x, t_y, s_x, s_y) = g_\theta(y, \hat{y}), \qquad A_\theta = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \end{bmatrix}$  (6)

where $g_\theta$ is a neural network and $A_\theta$ is a 2D affine transformation matrix that is applied to each pixel in $y$ and used to perform differentiable image sampling as defined in [15]. The transformation defined through $g_\theta$ and $A_\theta$ is end-to-end differentiable and can be trained by backpropagation together with the Amodal-VAE. The spatial transformer function, operating on the Amodal-VAE output, is trained during training stages (2) and (3) (in training stage (1) we train on complete masks only).

5 Experiments

We now extensively evaluate our Amodal-VAE and show its application to interactive scene editing. Please refer to the supplementary material for training and model implementation details.

Dataset: We focus on street scenes in this paper. KINS [27] is a large-scale dataset derived from KITTI [10], which contains both instance and amodal annotations. The dataset consists of 7,474 images for training and 7,517 images for testing. There are 18,241 and 17,646 complete instances in the training and test set, respectively. Following [39], we use the first ≈ 10% of images from the test set as a validation set (750 images in total). In this paper, we only exploit instance masks during training; amodal ground truth labels are used for evaluation only. The Cityscapes dataset [7] contains 5,000 images of driving scenes, including 2,975 images for training, 500 for validation, and 1,525 for testing. In the training set, 11,251 out of 52,469 instances are without occlusion.
The instance masks in Cityscapes are finely annotated for the visible portions of the objects; however, no amodal annotations are available. In this paper, we treat Cityscapes as an additional dataset to test the generalization of our approach.

5.1 Amodal Mask Completion

Comparisons: We first benchmark our approach on the task of amodal mask completion. To compare with baseline models, we use the amodal completion setting introduced in [39], where at test time RGB images and ground truth (GT) instance masks are provided as input to our model. Since our model does not exploit specific foreground occlusion masks as input, we use the De-occlusion-NOG (no order grounding) setting as a baseline. The performance of our model on KINS is shown in Table 1. Because occluded regions are relatively small compared to full masks, the input instance masks already have a high 87.03% mean Intersection over Union (mIOU) with the GT full masks. For this reason, we additionally evaluate mIOU on the invisible area only. Results show that Amodal-VAE outperforms the state-of-the-art De-occlusion [39] model by 5.66% in invisible mIOU and 0.64% in full mIOU, which is a significant improvement.

For another baseline experiment, we generate a synthetic dataset from the KINS training set. Using the full mask data, for each mask we simulate 5 different occlusions by randomly pasting another mask as foreground, hence generating a synthetic dataset of paired partial and complete masks consisting of 91,205 examples. We can now use a nearest-neighbor-based approach for mask completion: we compute the cosine similarity between an input partial mask and the synthetic partial masks, and then output the full mask corresponding to the synthetic partial mask with the highest similarity to the input (a sketch is given below). Results show that Amodal-VAE outperforms this baseline (Nearest Neighbor Mask in Table 1).

Table 1: Amodal Completion on KINS. Invisible mIOU means we evaluate mIOU only on invisible areas. GT Crop denotes that the input is cropped by the GT amodal bounding box.

  Method                 | GT Crop | mIOU  | Invis. mIOU
  Instance Mask          |    ✗    | 87.03 | 0
  Nearest Neighbor Mask  |    ✗    | 93.71 | 54.97
  De-occlusion           |    ✗    | 94.04 | 57.19
  Amodal-VAE             |    ✗    | 94.68 | 62.85
  RGB-Amodal-VAE         |    ✗    | 94.53 | 61.97
  Amodal-VAE + GT Box    |    ✓    | 97.64 | 82.30

Table 2: Ablation study of Amodal-VAE on KINS.

  Method                   | Full mIOU | Invisible mIOU
  Amodal-VAE               |   94.68   | 62.85
  w/o Full-Mask training   |   94.28   | 58.92
  w/o Simulated training   |   83.30   | 35.82
  w/o Likelihood training  |   94.04   | 57.04
  - Latent Space L2 loss   |   94.02   | 56.90
  w/o Class Conditioning   |   93.56   | 53.03

We further ablate the use of RGB information as additional input to the VAE. After the full-mask-only training stage, we use a ResNet-50 pretrained on ImageNet, which takes cropped RGB images as input, concatenate the ResNet's features with the mask encoder output, add two further convolutional layers to merge the two, and predict the latent code posterior distribution. The ResNet is finetuned together with all other trainable parameters; we optimize the setup's hyperparameters and report the best result. As shown in Table 1, line RGB-Amodal-VAE, the additional RGB-based image features do not boost performance. Hence, for our main Amodal-VAE model we discard the RGB input for simplicity. The slight decrease in performance might seem counterintuitive, and it is possible that a more carefully designed model architecture would be able to extract more useful information from the RGB input, but we leave this for future research. In the experiments above, we always tightly crop the instance mask.
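As a reference point for the nearest-neighbor baseline above, here is a small sketch of the lookup step, under the assumption that all masks are flattened binary vectors; the array names are illustrative.

import numpy as np

def nearest_neighbor_complete(partial, bank_partial, bank_full):
    """partial: (H*W,) binary mask; bank_*: (N, H*W) paired synthetic masks."""
    q = partial / (np.linalg.norm(partial) + 1e-8)
    bank = bank_partial / (np.linalg.norm(bank_partial, axis=1, keepdims=True) + 1e-8)
    sims = bank @ q                      # cosine similarity to every partial mask
    return bank_full[np.argmax(sims)]    # full mask paired with the best match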
However, in an interactive scene editing tool, users can be asked to provide the amodal box. Thus, we also evaluate our method utilizing GT amodal bounding boxes, which precisely indicate the extent of the occluded area. In these experiments (Amodal-VAE + GT Box), we achieve 97.64% full mIOU and 82.30% invisible mIOU, respectively. This suggests that there is much room for improvement by better cropping the input masks automatically.

Posterior sampling: To further motivate the use of a probabilistic model, we show quantitative results from multiple posterior predictions. For each partial mask instance, we sample 20 latent codes from the approximate posterior distribution and decode them into the corresponding completed masks. We calculate mIOU using the masks with the best visible-area IOU or the best amodal GT IOU. The results in Table 4 show that by sampling we find masks that match the amodal GT significantly better than by using the approximate posterior mode. Hence, the approximate posterior incorporates diverse plausible masks, correctly capturing the ambiguity. Using samples from the full posterior distribution may benefit downstream applications. Additional results are provided in the supplementary material: we analyze approximate posterior widths as a function of the occlusion ratio and we also show prior samples.

Ablations: We first ablate the three training stages described in Sec. 4.1. The results in Table 2 show that performance drops by 3.93% if we omit the first Full-Mask-only training stage. Furthermore, the model performs significantly worse without the second occlusion-simulation training stage, because this is where the model learns to actually map partial to full masks. Likelihood-based (i.e., using the ELBO) partial-mask-only finetuning as the third stage plays an important role, since it brings real occluded instances into the training loop. Also, conditioning on class information is crucial, as it helps the VAE to better infer the masks, especially when there is a large occlusion.

Next, we conduct cross-dataset evaluations. We train Amodal-VAE on the Cityscapes training set and evaluate on the KINS test set. Due to the mismatch in class categories across datasets, we merge the bus and car classes into one class, and the motorcycle and bicycle classes into another. Results in Table 5 show the cross-domain stability of our model. We consistently outperform the De-occlusion baseline.

Table 5: Cross Domain Amodal Completion. Models are trained on Cityscapes and tested on KINS.

  Method       | GT Crop | Full mIOU | Invis. mIOU
  Amodal-VAE   |    ✗    |   93.72   | 56.18
  De-occlusion |    ✗    |   93.19   | 48.23

Table 6: User Study. We evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. Interestingly, subjects prefer our object completions to the human-labeled ones.

  Amodal-VAE GT-box | Ground Truth | No Preference
        46.68       |    39.50     |     13.8

Qualitative results: We show qualitative results in Figure 3. We also compare to human-annotated masks in Figure 4. Our generated masks contain more details and look more natural than the GT masks. We further show shape variations obtained by sampling from the approximate posterior distribution in Figure 6 and Figure 7, where different plausible completions are drawn from a single partial mask.

User study: We also evaluate our model against human-annotated amodal masks in KINS via an Amazon Mechanical Turk user study. We assume that the user draws the amodal box, which is provided to Amodal-VAE.
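The best-of-k posterior-sampling evaluation described in the "Posterior sampling" paragraph above can be sketched as follows, reusing the illustrative encoder/decoder interfaces from the earlier snippets; the reference mask can be either the visible area or the amodal GT.

import torch

def iou(a, b):
    inter = (a * b).sum()
    union = ((a + b) > 0).float().sum()
    return inter / (union + 1e-8)

@torch.no_grad()
def best_of_k_completion(enc, dec, partial, c, reference, k=20, t=0.5):
    mu, logvar = enc(partial, c)
    std = torch.exp(0.5 * logvar)
    best_mask, best_score = None, -1.0
    for _ in range(k):                      # 20 posterior samples in the paper
        z = mu + std * torch.randn_like(std)
        mask = (torch.sigmoid(dec(z)) > t).float()
        score = iou(mask, reference)
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score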
We randomly sampled 3,260 instances from the KINS test set and asked Turkers to indicate their preference between Amodal-VAE's amodal masks and the GT annotated amodal masks. Interestingly, as shown in Table 6, users prefer Amodal-VAE's masks 46.68% of the time versus 39.50% for the ground truth. This demonstrates that Amodal-VAE outperforms the drawing skills of the human annotators of the KINS dataset [27]. In the supplementary material, we provide additional results on amodal segmentation, where we first predict modal segmentation masks using a standard segmentation model, and then use Amodal-VAE to complete the partial segmentation masks.

5.2 Object Manipulation Application

Here, we apply Amodal-VAE to interactive scene editing and report the results.

Background and Instance Inpainting: Since Amodal-VAE can be used to predict complete instance masks for all objects in a scene, we can use these inferred masks to move or delete objects. Such operations will uncover previously occluded parts of the objects and the background. We complete the missing content using an inpainting neural network, which takes RGB images with missing content as input and generates a realistic completed output. Similar to [39], we use the convolutional inpainting network from [25], which employs partial convolutions and nearest-neighbor up-sampling in the decoding stage. Inpainting details are available in the supplementary material. We benchmark the performance of instance inpainting. Since we do not have any ground truth appearance for the invisible areas, we use the Fréchet Inception Distance [12] (FID score) to evaluate the inpainting results. FID is a measure of similarity between two datasets of images. It was shown to correlate well with human judgment of visual quality and is most often used to evaluate the quality of samples from Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network [33]. In our case, we use non-occluded instances in the KINS test set as the reference dataset. For each instance, we use Amodal-VAE and the inpainting network to complete the mask and appearance. We compute FID distances between the reference dataset and the inpainting results based on the predicted amodal masks. Intuitively, the better and more natural the amodal mask is, the lower the FID score should be. Our Amodal-VAE achieves 41.44 versus 50.36 for the baseline De-occlusion approach. Note that the inpainting networks we use for both methods are identical. We thus conclude that the amodal masks predicted by Amodal-VAE lead to more natural completions.

Instance Manipulation: Furthermore, we show how we can change the pose of objects, even of those which are partially occluded. Since we are working with complex street scenes, we focus on cars for this demonstration. We exploit GauGAN [26], which can separately take into account local appearance and mask shape. We first infer an object's complete shape and appearance as described above. Then, we use GauGAN's encoder to infer its latent representation, which captures only local appearance information. When regenerating the image, we randomly sample complete shapes from the test set and feed them to the SPADE layers. Due to the separation of local appearance and global semantic information in GauGAN, the newly generated scenes reflect any pose changes.

Qualitative results: We first show the qualitative inpainting results in Table 3.
Conditioned on the complete mask, the inpainting network successfully recovers the invisible appearance. We also delete foreground objects using a separate background inpainting module. In Figure 5, we showcase the different functionalities of our interactive scene editing tool. Based on the amodal mask, our tool supports swapping the order of, deleting, moving, and scaling objects. We also showcase how we can change the pose of objects by utilizing GauGAN as described.

6 Conclusions

In this work, we propose Amodal-VAE, a simple probabilistic method for amodal instance completion which does not require amodal labels for training. In particular, our method is based on a variational autoencoder that learns to reconstruct full object masks from partially occluded ones using a carefully designed training strategy. This exploits both full and partial instances available in existing segmentation datasets. We quantitatively and qualitatively showcase the performance of our method on the downstream task of scene editing of complex street scenes. Our experiments show significant improvement over the recently proposed state-of-the-art method. We provide our method as an interactive image editing tool where users can remove, move, or swap different objects in an image. Note that training Amodal-VAE requires a high-quality dataset with complete masks, and each category must contain a sufficient number of objects. Therefore, in this work we focus on driving scenes, which contain mainly rigid objects and for which sufficient data is available. Applying our model to more complex scenes and in settings with limited data is left for future work.

7 Broader Impact

Our proposed model can be used in a wide range of applications that require reasoning about occluded objects. These include planning tasks in robotics, object tracking, and editing a photo or video. We focus on two significant impacts of using our model. The first is in the context of autonomous driving. An autonomous car must infer the geometry and identity of surrounding objects for its decision-making process. Partially visible objects could lead to wrong estimates for motion planning, and thus reasoning about the full extent of objects can lead to much safer control. Our approach infers the complete shapes of occluded objects for this purpose. The other major impact is on augmented reality. One could use our technology to snap a photograph of their environment and "delete" existing objects from the photograph, replacing them with alternatives. The crux of our approach is in deleting content from an image, which could be subject to misuse. We encourage work on detecting fakes as the standard technology to deal with image manipulation approaches.

Acknowledgments and Disclosure of Funding

This work was fully funded by NVIDIA and no third-party funding was used.
1. What is the main contribution of the paper regarding amodal completion?
2. What are the strengths of the proposed method, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding its mask completion results and limitations in certain scenarios?
4. How does the reviewer suggest improving the method, such as incorporating RGB pixels or comparing with other methods that use RGB inputs or 3D cues?
5. What is the significance of the rebuttal experiment and its impact on the paper's claims?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: This paper tackles two problems in amodal completion: the difficulty and ambiguity of occlusion annotation. It proposes a VAE framework that trains on unpaired full masks and occluded masks in three stages, so that encodings of full and occluded masks are learned and aligned, and decoding of full masks from the aligned latent space is also learned. The method is used for mask completion, inpainting, and scene manipulation.

Strengths:
- The method is simple yet effective, relieving the supervision demand for mask completion (not sure whether for the first time or not).
- The three-stage VAE training is interesting and intuitive, and hopefully is able to inspire methods in other tasks.
- Mask completion results look good, and hand in hand with other tools, the method can also be applied to inpainting and multiple scene manipulation tasks.

Weaknesses:
- Some masks do not look reasonable when visual cues are considered, for example, the top-left mask of Figure 7, which nevertheless might make sense without the image shown. Despite the paper saying "due to the nature of our Amodal-VAE, we discard RGB pixels...", I wonder if the VAE is also able to condition on the instance appearance somehow and if it helps.
  * Rebuttal experiment shows adding RGB cannot help. It didn't surprise me much, as it makes the training input more "noisy" and training more prone to overfitting (to some RGB features). Humans can leverage RGB in sort of a reasoning way, i.e., when the mask can have two explanations, use RGB to match the two hypotheses via some mental simulation, and decide. This can be too hard for neural networks trained for one task. So I think this part helps justify the methodology.
- Empirically, full masks and occluded masks of cars follow strong patterns in datasets like KITTI. In other domains, the lack of visual cues might hurt more. More results from different instance classes, or more comparisons with methods that take RGB inputs [1,2,3] or even 3D cues [4], would be valuable to address my concerns.
  * Rebuttal: I can also understand the object category is limited by dataset availability. Glad if the authors "show quantitive results on more classes and non-rigid objects for the camera-ready version" as claimed.
- For the scene manipulation part, the pipelined approach does not integrate visual or 3D cues in the Amodal-VAE part, and GauGAN seems to do the heavier lifting. Also, no comparison or quantitative results are shown. To this end, [5] provides a scene manipulation benchmark and a baseline to compare with (via a user study or metrics like FID).

[1] Amodal instance segmentation. ECCV 2016.
[2] Amodal instance segmentation with KINS dataset. CVPR 2019.
[3] Visualizing the invisible: occluded vehicle segmentation and recovery. CVPR 2019.
[4] Learning 3D shape completion under weak supervision. IJCV 2018.
[5] 3D-aware scene manipulation via inverse graphics. NeurIPS 2018.
NIPS
Title: Adversarial Blocking Bandits

Abstract: We consider a general adversarial multi-armed blocking bandit setting where each played arm can be blocked (unavailable) for some time periods and the reward per arm is given at each time period adversarially, without obeying any distribution. The setting models scenarios of allocating scarce limited supplies (e.g., arms) where the supplies replenish and can be reused only after certain time periods. We first show that, in the optimization setting, when the blocking durations and rewards are known in advance, finding an optimal policy (e.g., determining which arm to play per round) that maximises the cumulative reward is strongly NP-hard, eliminating the possibility of a fully polynomial-time approximation scheme (FPTAS) for the problem unless P = NP. To complement our result, we show that a greedy algorithm that plays the best available arm at each round provides an approximation guarantee that depends on the blocking durations and the path variance of the rewards. In the bandit setting, when the blocking durations and rewards are not known, we design two algorithms, RGA and RGA-META, for the case of bounded duration and path variation. In particular, when the variation budget $B_T$ is known in advance, RGA can achieve $O(\sqrt{T(2\tilde{D}+K)B_T})$ dynamic approximate regret. On the other hand, when $B_T$ is not known, we show that the dynamic approximate regret of RGA-META is at most $O((K+\tilde{D})^{1/4}\tilde{B}^{1/2}T^{3/4})$, where $\tilde{B}$ is the maximal path variation budget within each batch of RGA-META (which is provably of order $o(\sqrt{T})$). We also prove that if either the variation budget or the maximal blocking duration is unbounded, the approximate regret will be at least $\Theta(T)$. We also show that the regret upper bound of RGA is tight if the blocking durations are bounded above by an order of $O(1)$.

1 Introduction

This paper investigates the blocking bandit model, where pulling an arm results in having that arm blocked for a deterministic number of rounds. For example, consider the classical problem of online task allocation, in which new task requests arrive at each time step, waiting to be assigned to one of many servers [Karthik et al., 2017]. Once a server is allocated to a task, it starts working on it, and becomes unavailable for future tasks until that task is done. If there are no servers available or none is allocated to the task at its arrival, the request will not be served and leaves the system forever. A more recent example comes from the domain of expert crowdsourcing (e.g., Upwork, Outsourcely, etc.). In this setting, a job requester can sequentially choose from a pool of workers and allocate a short-term job/project to the worker [Ho and Vaughan, 2012, Tran-Thanh et al., 2014]. The stochastic version of this problem, where the rewards are randomly drawn from a distribution in an i.i.d.
manner, with the constraint that the blocking durations are fixed per arm over time, has been studied in [Basu et al., 2019] and [Basu et al., 2020]. However, in many applications, the stochastic setting is too restrictive and not realistic. For example, in the online task allocation problem, the tasks can be heterogeneous, and both the value and the serving time of the tasks can vary over time in an arbitrary manner. Furthermore, in the expert crowdsourcing setting, the time and quality with which workers deliver a job are unknown in advance, can vary over time, and do not necessarily follow an i.i.d. stochastic process. These examples demonstrate that for many real-world situations, the stochastic blocking bandit model is not an appropriate choice. To overcome this issue, in this paper we propose the adversarial blocking bandit setting, where both the sequence of rewards and the blocking durations per arm can be arbitrary. While the literature on adversarial bandits is enormous, to the best of our knowledge, this is the first attempt to address the effect of blocking in adversarial models. In particular, we are interested in a setting where the rewards are neither sampled i.i.d., nor maliciously chosen in an arbitrary way. Instead, in many real-world systems, the change in the value of rewards is rather slow or smooth over time (e.g., in the online task allocation problem, similar tasks usually arrive in batches, or in a crowdsourcing system, workers may have periods when they perform consistently, and thus their performance varies slowly over time). To capture this, we assume that there is a path variation budget which controls the change of the rewards over time.¹

1.1 Main Contributions

In this paper, apart from the adversarial blocking bandit setting, we also investigate two additional versions of the model: (i) the offline MAXREWARD problem, where all the rewards and blocking durations are known in advance; and (ii) the online version of MAXREWARD, in which we see the corresponding rewards and blocking durations of the arms at each time step before we choose an arm to pull. Our main findings can be summarised as follows:

1. We prove that the offline MAXREWARD problem is strongly NP-hard (Theorem 1). Note that this result is stronger than the computational hardness result in Basu et al. [2019], which depends on the correctness of the randomised exponential time hypothesis.

2. We devise a provable approximation ratio for a simple online greedy algorithm, Greedy-BAA, for the online MAXREWARD problem (Theorem 2). Our approximation ratio, when applied to the stochastic blocking bandit model with fixed blocking durations, is slightly weaker than that of Basu et al. [2019]. However, it is more generic, as it can be applied to any arbitrary sequence of rewards and blocking durations.

3. For the bandit setting, we consider the case when both the maximal blocking duration and the path variance are bounded, and propose two bandit algorithms:

   • We design RGA for the case of a known path variation budget $B_T$. In particular, we show that RGA can provably achieve $O(\sqrt{T(2\tilde{D}+K)B_T})$ regret, where $T$ is the time horizon, $K$ is the number of arms, $\tilde{D}$ is the maximum blocking duration, and the regret is computed against the performance of Greedy-BAA (Theorem 3).

   • For the case of an unknown path variation budget $B_T$, we propose RGA-META, which uses Exp3 as a meta-bandit algorithm to learn an appropriate path variation budget and runs RGA with it.
We prove that RGA-META achieves an $O((K+\tilde{D})^{1/4}\tilde{B}^{1/2}T^{3/4})$ regret bound, where $\tilde{B}$ is the maximal path variance within a single batch of the algorithm and is of order $O(\sqrt{T})$ in the worst case (Theorem 4).

4. Finally, we also discuss a number of regret lower bound results. In particular, we show that if either $B_T$ or $\tilde{D}$ is in $\Theta(T)$ (or unbounded), then the regret is at least $\Theta(T)$ (Claims 1 and 2). We also discuss that if $\tilde{D} \in O(1)$, then there is a matching lower bound for the regret of RGA (Section 5).

¹ We will show in Section 5 that bounded variation budgets are necessary to achieve sub-linear regrets.

1.2 Related Work

Stochastic Blocking Bandits. The most relevant work to our setting is the stochastic blocking bandit model. As mentioned before, Basu et al. [2019] introduce and study this model, where the reward for each time period is generated from a stochastic distribution with mean reward $\mu_k$ for each arm $k$, and the blocking duration is fixed across all time periods for each arm $k$ (i.e., $D_t^k = D^k$ for all $t$ and $k$). In the optimization setting where the mean rewards and blocking durations are known, they consider a simpler version of the MAXREWARD problem for their setting, show that the problem is as hard as PINWHEEL Scheduling on dense instances [Jacobs and Longo, 2014], and show that a simple greedy algorithm (see Algorithm 1) achieves an approximation ratio of $(1 - 1/e - O(1/T))$, where $T$ is the total number of time periods. In the bandit setting, they provide lower and upper regret bounds that depend on the number of arms, the mean rewards, and $\log(T)$. A very recent work [Basu et al., 2020] extends the stochastic blocking bandit to a contextual setting where a context is sampled according to a distribution in each time period and the reward per arm is drawn from a distribution whose mean depends on the pulled arm and the given context. Similar to the work of Basu et al. [2019], Basu et al. [2020] derive an online algorithm with an approximation ratio that depends on the maximum blocking durations and provide upper and lower α-regret bounds of $O(\log T)$ and $\Omega(\log T)$, respectively. However, the results from these models cannot be directly applied to the adversarial setting due to the differences between the stochastic and adversarial reward generation schemes.

Budgeted and Knapsack Bandits. Since the underlying offline optimisation problem of our setting, MAXREWARD, can also be cast as an instance of the multiple-choice multidimensional knapsack problem, it is also worth mentioning the line of work in the bandit literature that solves online knapsack problems with bandit feedback. In these models, the pull of an arm requires the consumption of resources in $d \geq 1$ dimensions. The resource consumption per arm is given either stochastically or adversarially in each time period, and a (non-replenishable) total budget $B = (B_1, \ldots, B_d)$ is available at the initial time period. The one-dimensional stochastic version of this setting is first studied in Tran-Thanh et al. [2010, 2012], Ding et al. [2013] under the name budgeted bandits, and is later extended to multiple dimensions (a.k.a. bandits with knapsacks) by Badanidiyuru et al. [2013], Agrawal and Devanur [2014], Badanidiyuru et al. [2014]. More recently, Rangi et al. [2019] and Immorlica et al. [2019] initiate the study of adversarial knapsack bandits. Rangi et al. [2019] consider the $d = 1$ setting with a regret benchmark that is measured based on the best fixed arm's reward-to-cost ratio.
Under such a regret benchmark, they show that sub-linear regret (with respect to $B$ and $k$) is possible in both the stochastic and adversarial settings. Immorlica et al. [2019] consider the $d \geq 1$ setting with a regret benchmark that is defined to be the ratio of the expected reward of the best fixed distribution over arms and the policy's expected reward. They show that this ratio is at least $\Omega(\log T)$. However, none of the techniques developed in these works can be applied to our setting, for the following reason: the results in the knapsack bandit models typically assume that the pulling costs are bounded above by a constant, and that the budget is significantly larger than this constant to allow sufficient exploration. In contrast, when MAXREWARD is converted into a knapsack model, many of its dimensions will have a budget of 1, and the corresponding pulling cost for that dimension is also 1 (due to the blocking condition).

Other Settings with Arm Availability Constraints. Other bandit models with arm availability constraints include the mortal bandits [Chakrabarti et al., 2009], sleeping bandits [Kleinberg et al., 2010, Kale et al., 2016], bandits with stochastic action sets [Neu and Valko, 2014], and combinatorial semi-bandits [Neu and Bartók, 2016]. We refer readers to [Basu et al., 2019] for a discussion of these models, including the relevance of the blocking bandit setting to online Markov decision processes.

Connection to the scheduling literature. Notice that there is a strong connection between MAXREWARD and interval scheduling problems. In particular, the MAXREWARD problem belongs to the class of fixed interval scheduling problems with arbitrary weight values, no preemption, and machine-dependent processing times (see, e.g., Kolen et al. [2007] for a comprehensive survey). This is one of the most general, and thus hardest, versions of the fixed interval scheduling literature (see, e.g., Kovalyov et al. [2007] for more details). In particular, MAXREWARD is a special case of this setting where, for each task, the starting point of the feasible processing interval is equal to the arrival time. Note that, to date, provable performance guarantees for fixed interval scheduling problems with arbitrary weight values only exist in offline settings, online but preemptive settings, or settings with special uniformity assumptions (e.g., [Erlebach and Spieksma, 2000, Miyazawa and Erlebach, 2004, Bender et al., 2017, Yu and Jacobson, 2020]). Therefore, to the best of our knowledge, Theorem 2 in our paper is the first result which provides a provable approximation ratio for a deterministic algorithm in an online non-preemptive setting. Note that with some modifications, our proof can also be extended to the general online non-preemptive setting, i.e., online interval scheduling with arbitrary weight values, no preemption, and machine-dependent processing times.

2 Preliminaries

Adversarial blocking bandit. In this paper we consider the following bandit setting. Let $\mathcal{K} = \{1, \ldots, K\}$ be the set of $K$ arms, and let $\mathcal{T} = \{1, \ldots, T\}$ denote a sequence of $T$ time steps, or decision points, faced by a decision maker. At every time step $t \in \mathcal{T}$, the decision maker may pull one of the $K$ arms. When pulling an arm $k \in \mathcal{K}$ at time step $t \in \mathcal{T}$, the reward $X_t^k \in [0, 1]$ is obtained. In addition, the pulled arm $k$ is deterministically blocked and cannot be pulled for the next $(D_t^k - 1)$ time steps, for some integer blocking duration $D_t^k \in \mathbb{Z}^+$. We also use the notation $\emptyset$ to denote the action of not pulling an arm.
In this case, $X_t^\emptyset = 0$ and $D_t^\emptyset = 1$ for each time step $t$. We denote by $X^k = \{X_t^k\}_{t=1}^T$ the sequence of rewards over $T$ time steps associated with an arm $k \in \mathcal{K}$, and by $X = \{X^k\}_{k=1}^K$ the sequence of vectors of all $K$ rewards. Similarly, we denote by $D^k = \{D_t^k\}_{t=1}^T$ the sequence of blocking durations over $T$ time steps associated with an arm $k$, and by $D = \{D^k\}_{k=1}^K$ the sequence of vectors of all $K$ blocking duration vectors. In our model, the rewards and blocking durations of each arm can change an arbitrary number of times. We let $\tilde{D}$ (resp. $\underset{\sim}{D}$) be the maximal (resp. minimal) blocking duration, i.e., the upper (resp. lower) bound on the largest (resp. smallest) possible blocking duration. We denote by $\mathcal{D} = \{1, \ldots, \tilde{D}\}^{K \times T}$ the set of all blocking duration vector sequences which are upper bounded by $\tilde{D}$. Note that $\mathcal{D}$ is defined with respect to the minimal blocking duration $\underset{\sim}{D} = 1$; it is sometimes useful to define $\mathcal{D}$ with respect to an arbitrary lower bound $\underset{\sim}{D}$.

Bounded path variation. Motivated by and adapted from a recent line of work in the bandit literature (e.g., [Besbes et al., 2014]), we assume that there is a path variation budget on the sequence of the rewards. In particular, the path variation of the sequence of rewards is defined to be

$\sum_{t=1}^{T-1} \sum_{k=1}^{K} \left| X_{t+1}^k - X_t^k \right|.$

We refer to $B_T$ as the path variation budget over $T$. We define the corresponding temporal uncertainty set as the set of reward vector sequences which satisfy the variation budget over the set of time steps $\{1, \ldots, T\}$:

$\mathcal{B} = \left\{ X \in [0,1]^{K \times T} : \sum_{t=1}^{T-1} \sum_{k=1}^{K} \left| X_t^k - X_{t+1}^k \right| \leq B_T \right\}$

Note that by setting $B_T = KT$ we can recover the standard unbounded version of our bandit model (as all the rewards are from $[0, 1]$). Our analysis also works for other variation budgets, such as the maximum variation [Besbes et al., 2014] or the number-of-changes budget [Auer et al., 2019]; see Section 5 for a more detailed discussion.

Arm pulling policy. Let $U$ be a random variable defined over a probability space $(\mathbb{U}, \mathcal{U}, \mathbb{P}_u)$. Let $\pi_1 : \mathbb{U} \to \mathcal{K}$ and $\pi_t : [0,1]^{t-1} \times \{1, \ldots, \tilde{D}\}^{t-1} \times \mathbb{U} \to \mathcal{K}$ for $t = 2, 3, \ldots$ be measurable functions. With some abuse of notation, we denote by $\pi_t \in \mathcal{K}$ the arm chosen at time $t$, which is given by

$\pi_t = \begin{cases} \pi_1(U) & t = 1 \\ \pi_t(X_{t-1}^\pi, \ldots, X_1^\pi, D_{t-1}^\pi, \ldots, D_1^\pi, U) & t = 2, 3, \ldots \end{cases}$

Here $X_t^\pi$ (resp. $D_t^\pi$) denotes the reward (resp. blocking duration) observed by the policy $\pi$ at time $t$. The mappings $\{\pi_t : t = 1, \ldots, T\}$ together with the distribution $\mathbb{P}_u$ define the class of policies. We define the class $\mathcal{P}$ of admissible policies to be those which, at every time step, choose an action that is not blocked. That is,

$\mathcal{P} = \left\{ (\pi_1, \ldots, \pi_T) : \pi_t \notin \{\pi_j : j + D_j^{\pi_j} - 1 \geq t,\ \forall j \leq t-1\},\ \forall t \in \{1, \ldots, T\},\ X \in \mathcal{B},\ D \in \mathcal{D} \right\}.$

Algorithm 1: Greedy-BAA
  Input: $T$, $K$, $\{X_t^k\}_{k \in \mathcal{K}, t \in \mathcal{T}}$, $\{D_t^k\}_{k \in \mathcal{K}, t \in \mathcal{T}}$ — an instance of the MAXREWARD problem
  Output: $\pi^+ = (\pi_1^+, \pi_2^+, \ldots, \pi_T^+) \in \mathcal{P}$ — a greedy solution to the MAXREWARD problem
  $\pi^+ \leftarrow (\emptyset, \ldots, \emptyset)$
  for $j \leftarrow 1$ to $T$ do
    select $\pi_j^+ \in \arg\max_{k_j \in A_j(\pi_1^+, \ldots, \pi_{j-1}^+) \cup \emptyset} X_j^{k_j}$   # see the preliminaries for definitions
  return $\pi^+$

In addition, let $A_t(\pi_1, \ldots, \pi_{t-1}) = \mathcal{K} \setminus \{\pi_j : j + D_j^{\pi_j} - 1 \geq t,\ \forall j \leq t-1\}$ denote the set of available arms at time step $t$ (we will also use $A_t$ for brevity).

Objective. The cumulative reward of a policy $\pi \in \mathcal{P}$ is defined to be $r(\pi) = \sum_{t=1}^T X_t^\pi$, where $X_t^\pi$ is the reward obtained by policy $\pi$ at time step $t$.
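Putting the preceding definitions together, Algorithm 1 admits a short, direct transcription. The sketch below assumes rewards and durations are given as T×K arrays; all names are ours and purely illustrative.

import numpy as np

def greedy_baa(X, D):
    """X, D: (T, K) arrays of rewards and blocking durations.
    Returns the greedy policy (None = pull no arm) and its cumulative reward."""
    T, K = X.shape
    blocked_until = np.zeros(K, dtype=int)   # first step at which each arm is free
    policy, total = [], 0.0
    for t in range(T):
        avail = [k for k in range(K) if blocked_until[k] <= t]   # the set A_t
        # Play the best available arm, or no arm if nothing beats reward 0
        # (this mirrors the argmax over A_t ∪ {∅} in Algorithm 1).
        if avail and max(X[t, k] for k in avail) > 0:
            k = max(avail, key=lambda k: X[t, k])
            blocked_until[k] = t + D[t, k]   # blocked for the next D_t^k - 1 steps
            policy.append(k)
            total += X[t, k]
        else:
            policy.append(None)
    return policy, total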
Our objective is to find $\pi^* \in \mathcal{P}$ such that $\pi^* \in \arg\max_{\pi \in \mathcal{P}} \mathbb{E}_\pi[r(\pi)]$, where the expectation is over all possible randomisation coming from policy $\pi$.

Feedback. The difficulty of the optimisation problem depends on the information (or feedback) we have about the rewards and blocking durations of the arms. In this paper, we consider three feedback models in increasing order of difficulty. In the simplest setting, we know the values of all $X_t^k$ and $D_t^k$ in advance. We refer to this setting as the (offline) MAXREWARD optimization problem. In the online version of MAXREWARD, we assume that $X_t^k$ and $D_t^k$ are not known in advance, but at each time step $t$, the values of $X_t^k$ and $D_t^k$ for all $k$ at that particular time step are revealed before we choose an arm to pull. Finally, in the (classical) bandit setting, we assume that only the reward and blocking duration of the chosen arm are revealed after that arm is pulled.² We will refer to this third model as the adversarial blocking bandit problem.

² In this paper, due to space limits, we do not deal with the full information feedback model, in which the reward and blocking duration values of all the arms are revealed at each time step after the pull.

3 The Offline and Online MAXREWARD Problems

We start with the analysis of the offline and online MAXREWARD problems. As a slight preview of the next subsections, computing an optimal solution of the offline MAXREWARD problem is strongly NP-hard, even with a bounded variation budget. This result eliminates the possibility of a fully polynomial-time approximation scheme (FPTAS) for the problem unless P = NP. In addition, for the online MAXREWARD problem, we design an online greedy algorithm with a provable approximation guarantee.

3.1 The Computational Complexity of the Offline MAXREWARD Problem

To show that the MAXREWARD problem is strongly NP-hard, we reduce from the Boolean satisfiability problem with three literals per clause (3-SAT), which is known to be strongly NP-complete [Garey and Johnson, 1979]. In a 3-SAT instance, we are given m variables and n clauses. Each clause consists of three literals, and each literal is either a variable or the negation of a variable. The problem is to determine if there is a Boolean true/false assignment to each variable so that the given 3-SAT instance is true (i.e., each clause contains at least one true literal).

Theorem 1. Computing an optimal solution for the MAXREWARD problem is strongly NP-hard. The hardness result holds even when the path variation is bounded.

3.2 Online MAXREWARD Problem with Bounded Variation Budget

In this section, we consider the online version of MAXREWARD. We devise a simple online greedy algorithm, Greedy Best Available Arm (Greedy-BAA), in which, at each time step, the algorithm plays an available arm with the highest reward. Algorithm 1 provides a detailed description of Greedy-BAA. Below, we show that Greedy-BAA provides an approximation guarantee for the offline MAXREWARD problem that depends on the blocking durations and the variation budget.

Theorem 2. Let $k^* = \arg\max_k \frac{D_{\max}^k}{D_{\min}^k}$ denote the arm with the highest max-min blocking duration ratio. Let $\pi^+$ denote the solution returned by Greedy-BAA, and $\pi^*$ denote an optimal solution of the offline MAXREWARD problem, respectively. We state that:

$\left(1 + \frac{D_{\max}^{k^*}}{D_{\min}^{k^*}}\right) r(\pi^+) + \frac{D_{\max}^{k^*}}{D_{\min}^{k^*}} B_T \geq r(\pi^*).$

That is, Greedy-BAA has an approximation ratio of $\left(1 + \frac{D_{\max}^{k^*}}{D_{\min}^{k^*}}\right)^{-1} \left(1 - \frac{D_{\max}^{k^*} B_T}{D_{\min}^{k^*}\, r(\pi^*)}\right)$.
Note that as $D_{\min}^{k^*} \geq \underset{\sim}{D}$ and $D_{\max}^{k^*} \leq \tilde{D}$, the approximation ratio above can be further bounded from below by $\left(1 + \frac{\tilde{D}}{\underset{\sim}{D}}\right)^{-1} \left(1 - \frac{\tilde{D} B_T}{\underset{\sim}{D}\, r(\pi^*)}\right)$.

Comparison to the result of Basu et al. [2019]. We note that Basu et al. [2019] have studied the MAXREWARD problem with path variation budget $B_T = 0$ (i.e., the reward values are fixed over time) and homogeneous blocking durations per arm (i.e., the blocking duration per arm does not change over time). In that case, our proof provides an approximation ratio of $1/2$, whereas Basu et al. [2019] provide an approximation ratio of $(1 - 1/e - O(1/T))$. Their technique uses a much more complicated LP-based bounding proof that does not directly generalize to the case of $B_T > 0$ with varying blocking durations. On the other hand, our approximation ratio result holds for the general case. For example, if $B_T$ grows slower than $r(\pi^+)$ with $T$, our algorithm guarantees an approximation ratio of $(1 + 2\tilde{D}/\underset{\sim}{D})^{-1}$.

4 The Adversarial Blocking Bandit Problem

Given the investigation of the (offline and online) MAXREWARD problems in the previous section, we now turn to the main focus of our paper, namely the online MAXREWARD problem with bandit feedback, a.k.a. the adversarial blocking bandit problem. While regret analyses are typically done by benchmarking against the best fixed policy in hindsight, we can easily show that in our setting this benchmark would perform arbitrarily poorly compared to the offline optimal solution. Therefore, instead of following the standard regret analysis, we are interested in comparing the performance of the designed algorithms to that of the offline optimal solution, and we use the following regret definition:

Dynamic approximate regret. We compare the performance of a policy with respect to the dynamic oracle algorithm that returns the offline optimal solution of MAXREWARD. We define the α-regret under a policy $\pi \in \mathcal{P}$ as the worst-case difference between an (offline) α-optimal sequence of actions and the expected performance under policy $\pi$. More precisely, let $\pi^*$ denote the arm pulling policy of the dynamic oracle algorithm. The α-regret of a policy $\pi \in \mathcal{P}$ against $\pi^*$ is defined to be

$R_\pi^\alpha(B_T, \tilde{D}, T) = \alpha\, r(\pi^*) - \mathbb{E}[r(\pi)]$

where the expectation is over all the possible randomisation of $\pi$. Note that this regret notion is stronger than the regret against the best fixed policy in hindsight, as it is easy to show that the best fixed policy can perform arbitrarily badly compared to $\pi^*$.

4.1 Blocking Bandit with Known Path Variation Budget

We now turn to describing our new bandit algorithm, RGA, designed for the adversarial blocking bandit problem. This algorithm can be described as follows:

1. We split the time horizon $T$ into batches $\mathcal{T}_1, \ldots, \mathcal{T}_m$ of size $\Delta_T$ each (except possibly the last batch):

$\mathcal{T}_j = \{t \in \{1, \ldots, \Delta_T\} : (j-1)\Delta_T + t \leq \min\{j\Delta_T, T\}\}, \quad \text{for all } j = 1, \ldots, m,$

where $m = \lceil T / \Delta_T \rceil$ is the number of batches.

Algorithm 2: Repeating Greedy Algorithm (RGA)
  Input: $\Delta_T$
  for $j \leftarrow 1$ to $\lceil T / \Delta_T \rceil$ do
    for $\tau \leftarrow 1$ to $\Delta_T$ do
      if $1 \leq \tau \leq K$ then
        pull arm $k = \tau \bmod K + 1$
        receive reward and blocking duration $(X_\tau^k, D_\tau^k)$
        set $\hat{X}_t^k = X_\tau^k$ for all $t \in [1, \Delta_T]$
      if $K + 1 \leq \tau \leq \tilde{D} + K$ then
        pull no arms
      if $\tilde{D} + K + 1 \leq \tau \leq \Delta_T - \tilde{D}$ then
        pick arms according to Greedy-BAA$(\Delta_T - 2\tilde{D} - K,\ K,\ \hat{X}^1, \ldots, \hat{X}^K,\ D^1, \ldots, D^K)$
      if $\Delta_T - \tilde{D} + 1 \leq \tau \leq \Delta_T$ then
        pull no arms

2. Within each batch, we spend the first $K$ rounds pulling each arm once.
Without loss of generality, we shall assume that arm $k$ is pulled on round $k$. After this, we spend the next $\tilde{D}$ rounds pulling no arms; this ensures that all arms will be available when we next pull an arm.

3. Then, up until the final $\tilde{D}$ rounds, we play Greedy-BAA using the rewards observed in the first $K$ rounds as the fixed rewards for each arm.

4. In the final $\tilde{D}$ rounds of each batch, we again pull no arms. This ensures that all of the arms are available at the beginning of the next batch.

Theorem 3. Suppose that the variation budget $B_T$ is known in advance and the maximal duration $\tilde{D} \geq 1$ is such that $\tilde{D} B_T \in o(T)$. The α-regret of RGA, where $\alpha = \frac{\underset{\sim}{D}}{\tilde{D} + \underset{\sim}{D}}$, is at most $O(\sqrt{T(2\tilde{D}+K)B_T})$ when the parameter $\Delta_T$ is set to $\left\lceil \sqrt{\frac{(T+1)(2\tilde{D}+K)}{2 B_T}} \right\rceil$.

Note that this bound is sub-linear in $T$ if $\tilde{D} B_T = o(T)$ (e.g., if $\tilde{D}$ is bounded above by a constant and $B_T \in o(T)$). It is also worth noting that while $\alpha = \frac{1}{1+\tilde{D}}$ might imply that RGA can perform better than the worst-case performance of Greedy-BAA, with $B_T \in o(T)$ this is not the case (see Section ?? in the appendix for more details).

4.2 Blocking Bandit with Unknown Path Variation Budget

Note that RGA requires knowledge of $B_T$ in order to properly set $\Delta_T$. To resolve this issue we propose RGA-META, a meta-bandit algorithm in which each arm corresponds to an instance of the RGA algorithm whose $\Delta_T$ parameter is tuned for a different variation budget. The time horizon $T$ is broken into meta-blocks of length $H$. At the start of each meta-block, an arm (i.e., an instance of RGA with its corresponding budget) is selected according to the well-known Exp3 algorithm [Auer et al., 2002]. The chosen RGA is then played for the next $H$ time steps with optimally tuned restarts (see Theorem 3 for more details). At the end of the meta-block, Exp3 observes a reward corresponding to the total reward accumulated by the chosen RGA in this meta-block. The intuition behind this idea is that the meta-bandit will learn which budget is the best upper bound for RGA. In what follows, we denote the set of arms available to the Exp3 algorithm by $\mathcal{J}$, and the corresponding set of variation budgets by $\mathcal{J}_B$. The RGA-META algorithm uses $\lceil \log_2(KT) \rceil + 1$ meta-arms with budgets $\mathcal{J}_B = \{2^0, 2^1, \ldots, 2^{\lceil \log_2(KT) \rceil}\}$. That is, the budget values are powers of 2 up to the smallest power of 2 that is still larger than $KT$, which is the ultimate upper bound of the path variation budget (as $B_T \leq KT$). In addition, let $B_i$ denote the total path variance within batch $i$, and $\tilde{B} = \max_i B_i$.

Algorithm 3: Meta Repeating Greedy Algorithm (RGA-META)
  Input: $T$, $K$, $\gamma \in (0, 1]$, batch length $H$
  Initialize: $|\mathcal{J}| = \lceil \log_2(KT) \rceil + 1$, $\mathcal{J}_B = \{2^0, 2^1, \ldots, 2^{\lceil \log_2(KT) \rceil}\}$, $w_i(1) = 1$ for $i = 1, \ldots, |\mathcal{J}|$
  for $\tau = 1, \ldots, \lceil T/H \rceil$ do
    set $p_i(\tau) = (1 - \gamma)\, \frac{w_i(\tau)}{\sum_{j=1}^{|\mathcal{J}|} w_j(\tau)} + \frac{\gamma}{|\mathcal{J}|}$ for $i = 1, \ldots, |\mathcal{J}|$
    draw $i_\tau$ randomly according to the probabilities $p_1(\tau), \ldots, p_{|\mathcal{J}|}(\tau)$
    run RGA in batch $\tau$ with budget $\mathcal{J}_B[i_\tau] = 2^{i_\tau - 1}$ and optimally tuned restarts
    receive reward $x_{i_\tau}(\tau) \in [0, H]$ at the end of the batch
    for $j = 1, \ldots, |\mathcal{J}|$ do
      $\hat{x}_j(\tau) = x_j(\tau)/p_j(\tau)$ if $j = i_\tau$, and $0$ otherwise
      $w_j(\tau + 1) = w_j(\tau) \exp\left(\gamma\, \hat{x}_j(\tau) / (H |\mathcal{J}|)\right)$

We state the following:

Theorem 4. Suppose that the variation budget $B_T$ is unknown to us in advance. In addition, suppose that the maximal blocking duration $\tilde{D} \geq 1$ is such that $\tilde{D} B_T \in o(T)$.
We state the following:

Theorem 4. Suppose that the variation budget B_T is unknown in advance. In addition, suppose that the maximal blocking duration D̃ ≥ 1 is such that D̃B_T ∈ o(T). The α-regret of META-RGA, where α = 1/(1 + D̃), is at most

O( B̃^{1/2} T^{3/4} (2D̃ + K)^{1/4} ln(KT)^{1/4} ln(ln(KT))^{1/4} )

when the parameters of META-RGA are set as follows:

H = √( T(2D̃ + K) / (ln(KT) ln(ln(KT))) ),    γ = min{ 1, √( ln(KT) ln(ln(KT)) / ((e − 1)T) ) }.

Note that since B̃ ≤ HK by definition (the maximum path variation within a batch is at most HK), by setting H as above we always get sub-linear regret in T if D̃ ∈ O(1) (i.e., D̃ is bounded above by a constant); otherwise we need B̃²D̃ ∈ o(T). Furthermore, when B̃ is small, our regret bound tends to O(T^{3/4}). Thus, it is still an open question whether a tighter upper bound (e.g., O(√T)) can be obtained in this case (i.e., when the variation budget is unknown).

5 Discussions

In this section we provide some intuition for why B_T and D̃ were required to be small in the previous sections. In particular, we show that if either the variation budget or the maximum blocking duration is large, the α-regret is lower bounded by Θ(T). We also discuss a potential lower bound for the α-regret of the adversarial blocking bandit problem in the case of B_T ∈ o(KT) and D̃ ∈ O(1). Finally, we discuss how our results change if we use other types of variation budgets.

Large variation budget. Consider the case when B_T ∈ Θ(T). Theorem 3 indicates that the upper bound of the α-regret is Θ(T), where α = 1/(1 + D̃) as defined in Theorem 3. Indeed, we show that this is the best we can achieve:

Claim 1. For any T > 0 and B_T ∈ Θ(KT), there exists a sequence of rewards and blocking durations X and D such that R^α_π(B_T, D̃, T) = Θ(T) for that particular (X, D).

Large blocking durations. If D̃ ∈ Θ(T) and α is the approximation ratio of Greedy-BAA:

Claim 2. For any T > 0 and D̃ ∈ Θ(T), there exists a sequence of rewards and blocking durations X and D such that R^α_π(B_T, D̃, T) = Θ(T) for that particular (X, D).

Note that our regret bounds are only meaningful if D̃B_T ∈ o(T). Thus, it is still an open question whether sub-linear α-regret bounds in T can be achieved if both B_T, D̃ ∈ o(T) but D̃B_T ∈ Ω(T).

Almost matching regret lower bound for RGA. Consider the case when D̃ ∈ O(1). This implies that the α-regret bound of RGA reduces to O(√(KTB_T)). This in fact matches the known lower bounds on the 1-regret for the case of D̃ = 1 (i.e., no blocking) from the literature [Auer et al., 2019]. In particular, with D̃ = 1 the Greedy-BAA algorithm becomes optimal (see, e.g., Section 4.3 of Basu et al. [2019] for a discussion), and thus the α-regret notion coincides with 1-regret. Therefore, if there existed an algorithm that could achieve an α-regret better than O(√(KTB_T)) in our setting, it would also achieve O(√(KTB_T)) 1-regret for the standard (i.e., non-blocking) adversarial bandit. It is also worth noting that when D̃ is not bounded above by a constant, or when the variation budget B_T is not known in advance, the regret lower bound remains unknown.

Other variation budget definitions. There are a number of different variation budget definitions in the literature [Besbes et al., 2014, Wei and Luo, 2018, Auer et al., 2019]. It is worth noting that our analysis works in a similar way for the maximum variation budget B^max_T and the number-of-changes budget L_T, which can be defined as follows:

B^max_T = Σ_{t=1}^{T−1} max_{k∈K} |X^k_{t+1} − X^k_t|,    L_T = #{t : 1 ≤ t ≤ T − 1, ∃k : X^k_t ≠ X^k_{t+1}}.
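For concreteness, the following small sketch computes the three variation notions used in this paper for a K × T reward matrix; it is an illustrative helper under our notation, not code from the paper.

```python
def variation_budgets(X):
    """X[k][t] in [0, 1]: reward of arm k at round t (0-indexed).

    Returns (B_T, B_T_max, L_T):
      B_T     = sum over t of sum over k of |X[k][t+1] - X[k][t]|   (path variation)
      B_T_max = sum over t of max over k of |X[k][t+1] - X[k][t]|   (maximum variation)
      L_T     = number of rounds t at which some arm's reward changes
    """
    K, T = len(X), len(X[0])
    diffs = [[abs(X[k][t + 1] - X[k][t]) for k in range(K)] for t in range(T - 1)]
    B_T = sum(sum(d) for d in diffs)
    B_T_max = sum(max(d) for d in diffs)
    L_T = sum(1 for d in diffs if any(v > 0 for v in d))
    return B_T, B_T_max, L_T
```

By construction B^max_T ≤ B_T ≤ K · B^max_T and L_T ≤ T − 1, so the three budgets can differ substantially on the same reward sequence.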
If we use these variation budgets instead, the regret bound in Theorem 3 becomes O(√((2D̃ + K)T B^max_T)) and O(√((2D̃ + K)T L_T)), respectively. Furthermore, the approximation ratio of Greedy-BAA also changes: it becomes [(1 + D̃) + D̃K B^max_T / r(π+)]^{-1} and [(1 + D̃) + D̃K L_T / r(π+)]^{-1}, respectively. We refer the reader to the appendix for a more detailed discussion. It remains future work to derive regret bounds for the other variation budgets.

Broader Impact

The paper examines a novel multi-armed bandit problem in which the decision-making agent aims to receive as many (cumulative) rewards as possible over a finite period, subject to constraints. Our focus and results are largely theoretical. In particular, our contributions advance the understanding of multi-armed bandit models and their theoretical limitations, and benefit the general (theoretical) machine learning community, specifically the multi-armed bandit and online learning communities. In addition, we do not expect that our theoretical findings can be directly used in more applied domains.

Acknowledgments and Disclosure of Funding

Nicholas Bishop was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership grant. Debmalya Mandal was supported by a Columbia Data Science Institute Post-Doctoral Fellowship.
1. What is the focus and contribution of the paper regarding multi-armed bandit problems with arm blocking?
2. What are the strengths of the proposed algorithms, particularly in the offline and online settings?
3. Do you have any concerns or questions about the paper's weaknesses, such as the dependence on the reward obtained by the Greedy algorithm in the offline approximation guarantee?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or trade-offs in the proposed approaches that the authors should discuss further?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors consider a multi-armed bandit problem with arm blocking (after each play) where delays and rewards are adversarial. They prove that in the offline version, i.e., when the delays and rewards are known, computing the reward-maximizing arm-pull sequence is NP-hard. In the offline setting, they show that the Greedy algorithm that plays the best available arm at each time achieves a non-trivial approximation ratio as a function of various parameters of the system. In the online version, they consider the situation where the delays and rewards are unknown, but the path variance (B_T), maximum delay (D_up), number of arms (K), and time horizon (T) are known. In this setting, they design a repeating greedy algorithm (RGA) which is divided into a specific number of phases. In each phase, the algorithm first samples all the arms once. Then, using these samples as the mean values, it plays greedily for the rest of the phase. This provides an alpha-regret of O(sqrt((D_up+K) T B_T)) with alpha = O(1/D_up). Furthermore, when the path variance B_T is unknown, they provide an EXP3-based meta-algorithm (meta-RGA) that provides an O(T^3/4) alpha-regret for alpha = O(1/D_up).

Strengths
The authors provide an interesting extension of the blocking bandits problem with adversarial delays and rewards. In the offline setting, the hardness and approximation results seem to be novel. In the bandit setting, the algorithm is novel as it does not maintain a probability distribution over the arms; it simulates the offline Greedy algorithm with arm means sampled frequently enough. The meta-algorithm eliminates the need for knowledge of the path variance (which is impractical). However, it suffers a larger regret.

Weaknesses
For the offline problem, I feel the references are not adequate. The authors should more carefully look for similar settings in the scheduling literature. Further, the authors should describe the relation with the adversarial bandits with budget constraints [1,4,5] (citations in the paper) more clearly (e.g. local vs global constraints). Edit: The response is mostly satisfactory. Mention the approximation ratio of the other related scheduling papers, if possible.

The presence of the reward obtained by the Greedy algorithm in the offline approximation guarantee is not desirable. The authors should try to remove this dependence; using some trivial lower bound on the Greedy algorithm is one way to proceed. Edit: Not entirely satisfactory, as without removing the dependence on $r(\pi^+)$ it is very hard to evaluate the results in a sense that is standard in the literature. In fact, I think the following more transparent ratio is achievable (please verify): (1 - \frac{\tildeD}{\utildeD} \frac{\tildeD B_T}{\mu^* T} ) / (1 + \frac{\tildeD}{\utildeD})

When the ratio of the maximum to the minimum delay is not big, and the path variance is small, the approximation guarantee in the bandit version is much weaker than in the offline version. Is this unavoidable? Why does the analysis become loose? Edit: The response is not satisfactory, as my comment was on the approximation ratio, not the regret guarantee in the online setting. The example in the response still preserves the same approximation ratio in the online case.

When B_T is small (e.g. O(1), log(T)), the phase length Delta_T is long. The authors should discuss why it is a good idea not to keep track of the rewards during each phase, and may add some discussion of whether another approach is desirable in this regime.
Edit: The response clarifies my doubts for the above point.
NIPS
1. What is the focus and contribution of the paper regarding bandit settings?
2. What are the strengths of the proposed approach, particularly in terms of its comprehensiveness in treating various feedback models?
3. What are the weaknesses of the paper, especially regarding its hardness result and the bound on the maximum blocking length?
4. Do you have any concerns regarding the assumptions made in the paper's setting and models?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper considers a bandit setting where pulling an arm makes the arm unavailable for a number of rounds. The setting has adversarial rewards that satisfy a condition called “bounded path variation”, which limits how much the reward of an arm can change from timestep to timestep (in aggregate). The paper considers 3 feedback models: everything being known in advance, the rewards and block durations being known at the start of a round, and the reward and block duration of the pulled arm only being revealed after pulling the arm. The authors show that the offline problem (first feedback model) is NP-complete when blocking periods are O(T). This motivates looking for approximation algorithms for the other feedback models. For the second feedback model, the authors present a simple greedy algorithm with an approximation guarantee linear in the ratio of largest to smallest blocking length. Finally, the authors present 2 algorithms for the final feedback model (one for known path variation, and one for unknown path variation) and show these are vanishing-alpha-regret algorithms with alpha linear in block duration and the remaining regret vanishing as sqrt.

Strengths
1. The adversarial reward model (with bounded path variation) seems like a nice model to prove results for, and I appreciate that the authors point out in section 5 how results from this model translate to related models like “maximum variation budget”.
2. The authors are comprehensive in their treatment of feedback models and present solutions for all of them.

Weaknesses
1. [See Additional Feedback for post-rebuttal comments] The hardness result for the offline problem relies on blocking arms for the entire remaining duration. Presumably this isn’t the typical type of MAXREWARD instance that we’re primarily interested in (especially since the algorithms presented also break down in this case, see e.g. line 282). It’d be nice if we could say something about the case where blocking periods are bounded, e.g. by O(sqrt(T)).
2. For Thm 2 (and section 4), it seems this bound could be quite large (i.e. O(max blocking length)). It would be nice to show whether this is necessary (i.e. is there a lower bound), or if this is a function of the chosen algorithm.
NIPS
Title Adversarial Blocking Bandits Abstract We consider a general adversarial multi-armed blocking bandit setting where each played arm can be blocked (unavailable) for some time periods and the reward per arm is given at each time period adversarially without obeying any distribution. The setting models scenarios of allocating scarce limited supplies (e.g., arms) where the supplies replenish and can be reused only after certain time periods. We first show that, in the optimization setting, when the blocking durations and rewards are known in advance, finding an optimal policy (e.g., determining which arm per round) that maximises the cumulative reward is strongly NP-hard, eliminating the possibility of a fully polynomial-time approximation scheme (FPTAS) for the problem unless P = NP. To complement our result, we show that a greedy algorithm that plays the best available arm at each round provides an approximation guarantee that depends on the blocking durations and the path variance of the rewards. In the bandit setting, when the blocking durations and rewards are not known, we design two algorithms, RGA and RGA-META, for the case of bounded duration an path variation. In particular, when the variation budget BT is known in advance, RGA can achieve O( √ T (2D̃ +K)BT ) dynamic approximate regret. On the other hand, when BT is not known, we show that the dynamic approximate regret of RGA-META is at most O((K + D̃)B̃T ) where B̃ is the maximal path variation budget within each batch of RGA-META (which is provably in order of o( √ T ). We also prove that if either the variation budget or the maximal blocking duration is unbounded, the approximate regret will be at least Θ(T ). We also show that the regret upper bound of RGA is tight if the blocking durations are bounded above by an order of O(1). N/A RGA can achieve O( √ T (2D̃ +K)BT ) dynamic approximate regret. On the other hand, when BT is not known, we show that the dynamic approximate regret of RGA-META is at most O((K + D̃)1/4B̃1/2T 3/4) where B̃ is the maximal path variation budget within each batch of RGA-META (which is provably in order of o( √ T ). We also prove that if either the variation budget or the maximal blocking duration is unbounded, the approximate regret will be at least Θ(T ). We also show that the regret upper bound of RGA is tight if the blocking durations are bounded above by an order of O(1). 1 Introduction This paper investigates the blocking bandit model where pulling an arm results in having that arm blocked for a deterministic number of rounds. For example, consider the classical problem of online task allocation, in which new task requests arrive at each time step, waiting to be assigned to one of many servers [Karthik et al., 2017]. Once a server is allocated to a task, it starts working on it, and becomes unavailable for future tasks until that task is done. If there are no servers available or none is allocated to the task at its arrival, the request will not be served and leave the system forever. A more recent example comes from the domain of expert crowdsourcing (e.g., Upwork, Outsourcely, etc.). In this setting, a job requester can sequentially choose from a pool of workers and allocate a short-term job/project to the worker [Ho and Vaughan, 2012, Tran-Thanh et al., 2014]. The stochastic version of this problem, where the rewards are randomly drawn from a distribution in an 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. i.i.d. 
manner, with the constraint that the blocking durations are fixed per arm over time, has been studied in [Basu et al., 2019] and [Basu et al., 2020]. However, in many applications, the stochastic setting is too restrictive and not realistic. For example, in the online task allocation problem, the tasks can be heterogeneous, and both the value and the serving time of the tasks can vary over time in an arbitrary manner. Furthermore, in the expert crowdsourcing setting, the time and quality workers need to deliver the job are unknown in advance, can vary over time, and do not necessarily follow an i.i.d. stochastic process. These examples demonstrate that for many real-world situations, the stochastic blocking bandit model is not an appropriate choice. To overcome this issue, in this paper we propose the adversarial blocking bandit setting, where both the sequence of rewards and blocking durations per arm can be arbitrary. While the literature of adversarial bandits is enormous, to the best of our knowledge, this is the first attempt to address the effect of blocking in adversarial models. In particular, we are interested in a setting where the rewards are neither sampled i.i.d., nor maliciously chosen in an arbitrary way. Instead, in many real-world systems, the change in the value of rewards is rather slow or smooth over time (e.g., in the online task allocation problem, similar tasks usually arrive in batch, or in the crowdsourcing system, workers may have periods when they perform consistently, and thus, their performance slowly varies over time). To capture this, we assume that there is a path variation budget which controls the change of the rewards over time 1. 1.1 Main Contributions In this paper, apart from the adversarial blocking bandit setting, we also investigate two additional versions of the model: (i) The offline MAXREWARD problem, where all the rewards and blocking durations are known in advance; and (ii) the online version of MAXREWARD, in which we see the corresponding rewards and blocking durations of the arms at each time step before we choose an arm to pull. Our main findings can be summarised as follows: 1. We prove that the offline MAXREWARD problem is strongly NP-hard (Theorem 1). Note that this result is stronger than the computational hardness result in Basu et al. [2019], which depends on the correctness of the randomised exponential time hypothesis. 2. We devise a provable approximation ratio for a simple online greedy algorithm, Greedy-BAA, for the online MAXREWARD problem (Theorem 2). Our approximation ratio, when applied to the stochastic blocking bandit model with fixed blocking durations, is slightly weaker than that of Basu et al. [2019]. However, it is more generic, as it can be applied to any arbitrary sequence of rewards and blocking durations. 3. For the bandit setting, we consider the case when both the maximal blocking duration and the path variance are bounded, and propose two bandit algorithms: • We design RGA for the case of known path variation budget BT . In particular, we show that RGA can provably achieve O (√ T (2D̃ +K)BT ) regret, where T is the time horizon, K is the number of arms, D̃ is the maximum blocking duration, and the regret is computed against the performance of Greedy-BAA (Theorem 3). • For the case of unknown path variation budget BT , we propose RGA-META that uses Exp3 as a meta-bandit algorithm to learn an appropriate path variation budget and runs RGA with it. 
We prove that RGA-META achieves O((K + D̃)1/4B̃1/2T 3/4) regret bound where B̃ is the maximal path variance within a single batch of the algorithm, and is in order of O( √ T ) in the worst case (Theorem 4). 4. Finally, we also discuss a number of regret lower bound results. In particular, we show that if either BT or D̃ is in Θ(T ) (or unbounded), then the regret is at least Θ(T ) (Claims 1 and 2). We also discuss that if D̃ ∈ O(1), then there is a matching lower bound for the regret of RGA (Section 5). 1We will show in Section 5 that bounded variation budgets are necessary to achieve sub-linear regrets. 1.2 Related Work Stochastic Blocking Bandits. The most relevant work to our setting is the stochastic blocking bandit model. As mentioned before, Basu et al. [2019] introduce and study this model where the reward per each time period is generated from a stochastic distribution with mean µk reward for each arm k and the blocking duration is fixed across all time period for each arm k (e.g., Dkt = D k for all t and k). In the optimization setting where the mean rewards and blocking durations are known, they consider a simpler version of the MAXREWARD problem for their setting and show that the problem is as hard as the PINWHEEL Scheduling on dense instances [Jacobs and Longo, 2014] and provide that a simple greedy algorithm (see Algorithm 1) achieves an approximation ratio of (1− 1/e−O(1/T )) where T is total time period. In the bandit setting, they provide lower and upper regret bounds that depend on the number of arms, mean rewards, and log(T ). A very recent work [Basu et al., 2020] extends the stochastic blocking bandit to a contextual setting where a context is sampled according to a distribution each time period and the reward per arm is drawn from a distribution with the mean depending on the pulled arm and the given context. Similar to the work of Basu et al. [2019], Basu et al. [2020] derive an online algorithm with an approximation ratio that depends on the maximum blocking durations and provide upper and lower α-regret bounds of O(log T ) and Ω(log T ), respectively. However, the results from this models cannot be directly applied to the adversarial setting due to the differences between the stochastic and adversarial reward generation schemes. Budgeted and Knapsack Bandits. Since the underlying offline optimisation problem of our setting, MAXREWARD, can also be casted as an instance of the multiple-choice multidimensional knapsack problem, it is also worth mentioning the line of work in the bandit literature that solve online knapsack problems with bandit feedback. In these models, the pull of an arm requires the consumption of resources in d ≥ 1 dimensions. The resource per arm is given either stochastic or adversarially in each time period and a (non replenishable) total budget B = (B1, ..., Bd) is available at the initial time period. The one-dimensional stochastic version of this setting is first studied in Tran-Thanh et al. [2010, 2012], Ding et al. [2013] under the name budgeted bandits, and is later extended to multiple dimensions (a.k.a. bandits with knapsack) by Badanidiyuru et al. [2013], Agrawal and Devanur [2014], Badanidiyuru et al. [2014]. More recently, Rangi et al. [2019] and Immorlica et al. [2019] initiate the study of adversarial knapsack bandits. Rangi et al. [2019] consider the d = 1 setting with a regret benchmark that is measured based on the best fixed-arm’s reward to cost ratio. 
Under such a regret benchmark, they show that sub-linear regret (with respect to B and k) is possible in both the stochastic and adversarial settings. Immorlica et al. [2019] consider the d ≥ 1 setting with a regret benchmark that is defined to be the ratio of the expected reward of the best fixed distribution over arms and the policy’s expected reward. show that the ratio is at least Ω(log T ). However, none of the techniques developed in these work can be applied to our setting, due to the following reason: The results in the knapsack bandit models typically assume that the pulling costs are bounded above by a constant, and the budget is significantly larger than this constant to allow sufficient exploration. In contrast, when MAXREWARD is conversed into a knapsack model, many of its dimensions will have a budget of 1, and the corresponding pulling cost for that dimension is also 1 (due to the blocking condition). Other Settings with Arm Availability Constraints. Other bandit models with arm availability cosntrainsts include the mortal bandits [Chakrabarti et al., 2009], sleeping bandits [Kleinberg et al., 2010, Kale et al., 2016], bandits with stochastic action sets [Neu and Valko, 2014], and combinatorial semi-bandits [Neu and Bartók, 2016]. We refer readers to [Basu et al., 2019] for a discussion of these models, including the relevance of the blocking bandit setting to online Markov decision processes. Connection to the scheduling literature. Notice that there is a strong connection between MAXREWARD and the interval scheduling problems. In particular, the MAXREWARD problem belongs to the class of fixed interval scheduling problems with arbitrary weight values, no preemption, and machine dependent processing time (see e.g., Kolen et al. [2007] for a comprehensive survey). This is one of the most general, and thus, hardest versions of the fixed interval scheduling literature (see, e.g., Kovalyov et al. [2007] for more details). In particular, MAXREWARD is a special case of this setting where for each task, the starting point of the feasible processing interval is equal to the arrival time. Note that to date, provable performance guarantees for fixed interval scheduling problems with arbitrary weight values only exist in offline, online but preemptive, or settings with some special uniformity assumptions (e.g., [Erlebach and Spieksma, 2000, Miyazawa and Erlebach, 2004, Bender et al., 2017, Yu and Jacobson, 2020]). Therefore, to our best knowledge, Theorem 2 in our paper is the first result which provides provable approximation ratio for a deterministic algorithm in an online non-preemptive setting. Note that with some modifications, our proof can also be extended to the general online non-preemptive setting, i.e., online interval scheduling with arbitrary weight values, no preemption, and machine dependent processing time. 2 Preliminaries Adversarial blocking bandit. In this paper we consider the following bandit setting. Let K = {1, . . . ,K} be the set of K arms. Let T = {1, . . . , T} denote a sequence of T time steps, or decision points faced by a decision maker. At every time step t ∈ T , the decision maker may pull one of the K arms. When pulling an arm k ∈ K at time step t ∈ T , the reward Xkt ∈ [0, 1] is obtained. In addition, the pulled arm k is deterministically blocked and cannot be pulled for the next (Dkt − 1) time steps for some integer blocking duration Dkt ∈ Z+. We also use the notation ∅ to denote the action of not pulling an arm. 
In this case, $X^\emptyset_t = 0$ and $D^\emptyset_t = 1$ for each time step $t$. We denote by $X^k = \{X^k_t\}_{t=1}^{T}$ the sequence of rewards over $T$ time steps associated with an arm $k \in \mathcal{K}$, and by $X = \{X^k\}_{k=1}^{K}$ the collection of all $K$ reward sequences. Similarly, we denote by $D^k = \{D^k_t\}_{t=1}^{T}$ the sequence of blocking durations over $T$ time steps associated with an arm $k$, and by $D = \{D^k\}_{k=1}^{K}$ the collection of all $K$ blocking duration sequences. In our model, the rewards and blocking durations of each arm can change an arbitrary number of times. We let $\tilde{D}$ (resp. $\utilde{D}$) be the maximal (resp. minimal) blocking duration, i.e., the upper (resp. lower) bound on the largest (resp. smallest) possible blocking duration. We denote by $\mathcal{D} = \{1, \dots, \tilde{D}\}^{K \times T}$ the set of all blocking duration sequences that are upper bounded by $\tilde{D}$. Note that $\mathcal{D}$ is defined with respect to the minimal blocking duration $\utilde{D} = 1$; it is sometimes useful to define $\mathcal{D}$ for an arbitrary lower bound $\utilde{D}$.

Bounded path variation. Motivated by and adapted from a recent line of work in the bandit literature (e.g., [Besbes et al., 2014]), we assume that there is a path variation budget on the sequence of rewards. In particular, the path variation of the sequence of rewards is defined to be
$$\sum_{t=1}^{T-1} \sum_{k=1}^{K} \left| X^k_{t+1} - X^k_t \right|.$$
We refer to $B_T$ as the path variation budget over $T$. We define the corresponding temporal uncertainty set as the set of reward sequences that satisfy the variation budget over the set of time steps $\{1, \dots, T\}$:
$$\mathcal{B} = \left\{ X \in [0,1]^{K \times T} : \sum_{t=1}^{T-1} \sum_{k=1}^{K} \left| X^k_t - X^k_{t+1} \right| \le B_T \right\}.$$
Note that by setting $B_T = KT$ we recover the standard unbounded version of our bandit model (as all the rewards are in $[0, 1]$). Note also that our analysis works for other variation budgets such as the maximum variation [Besbes et al., 2014] or the number of changes budget [Auer et al., 2019]. See Section 5 for a more detailed discussion.

Arm pulling policy. Let $U$ be a random variable defined over a probability space $(\mathbb{U}, \mathcal{U}, \mathbb{P}_u)$. Let $\pi_1 : \mathbb{U} \to \mathcal{K}$ and $\pi_t : [0,1]^{t-1} \times \{1, \dots, \tilde{D}\}^{t-1} \times \mathbb{U} \to \mathcal{K}$ for $t = 2, 3, \dots$ be measurable functions. With some abuse of notation, we denote by $\pi_t \in \mathcal{K}$ the arm chosen at time $t$, which is given by
$$\pi_t = \begin{cases} \pi_1(U) & t = 1, \\ \pi_t(X^\pi_{t-1}, \dots, X^\pi_1, D^\pi_{t-1}, \dots, D^\pi_1, U) & t = 2, 3, \dots \end{cases}$$
Here $X^\pi_t$ (resp. $D^\pi_t$) denotes the reward (resp. blocking duration) observed by the policy $\pi$ at time $t$. The mappings $\{\pi_t : t = 1, \dots, T\}$, together with the distribution $\mathbb{P}_u$, define the class of policies. We define the class $\mathcal{P}$ of admissible policies to be those which, at every time step, choose an action that is not blocked. That is,
$$\mathcal{P} = \left\{ (\pi_1, \dots, \pi_T) : \pi_t \notin \{\pi_j : j + D^{\pi_j}_j - 1 \ge t, \ \forall j \le t-1\}, \ \forall t \in \{1, \dots, T\}, \ X \in \mathcal{B}, \ D \in \mathcal{D} \right\}.$$

Algorithm 1: Greedy-BAA
Input: $T$, $K$, $\{X^k_t\}_{k \in \mathcal{K}, t \in \mathcal{T}}$, $\{D^k_t\}_{k \in \mathcal{K}, t \in \mathcal{T}}$ - an instance of the MAXREWARD problem
Output: $\pi^+ = (\pi^+_1, \pi^+_2, \dots, \pi^+_T) \in \mathcal{P}$ - a greedy solution to the MAXREWARD problem
1 $\pi^+ = (\emptyset, \dots, \emptyset)$;
2 for $j \leftarrow 1$ to $T$ do
3     Select $\pi^+_j \in \arg\max_{k_j \in A_j(\pi^+_1, \dots, \pi^+_{j-1}) \cup \{\emptyset\}} X^{k_j}_j$  # See the preliminary section for definitions
4 end
5 return $\pi^+$

In addition, let $A_t(\pi_1, \dots, \pi_{t-1}) = \mathcal{K} \setminus \{\pi_j : j + D^{\pi_j}_j - 1 \ge t, \ \forall j \le t-1\}$ denote the set of available arms at time step $t$ (we will also write $A_t$ for brevity).

Objective. The cumulative reward of a policy $\pi \in \mathcal{P}$ is defined to be $r(\pi) = \sum_{t=1}^{T} X^\pi_t$, where $X^\pi_t$ is the reward obtained by policy $\pi$ at time step $t$. Our objective is to find $\pi^* \in \mathcal{P}$ such that $\pi^* \in \arg\max_{\pi \in \mathcal{P}} \mathbb{E}_\pi[r(\pi)]$, where the expectation is over all possible randomisation coming from policy $\pi$.
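To make the notation above concrete, the following is a minimal Python sketch of the adversarial blocking bandit dynamics, including the available-arm set $A_t$ and the empty action $\emptyset$. The class and method names are our own illustrative choices, not part of the paper.

import numpy as np

class BlockingBanditEnv:
    """Adversarial blocking bandit: rewards X[k, t] and blocking durations
    D[k, t] are fixed in advance by the adversary."""

    def __init__(self, X, D):
        self.X = np.asarray(X, dtype=float)   # shape (K, T), rewards in [0, 1]
        self.D = np.asarray(D, dtype=int)     # shape (K, T), durations >= 1
        self.K, self.T = self.X.shape
        self.blocked_until = np.zeros(self.K, dtype=int)  # first step each arm is free again
        self.t = 0

    def available_arms(self):
        """The set A_t of arms that are not blocked at the current step."""
        return [k for k in range(self.K) if self.blocked_until[k] <= self.t]

    def pull(self, k):
        """Pull arm k, or None for the empty action (reward 0, duration 1)."""
        if k is None:
            self.t += 1
            return 0.0, 1
        assert self.blocked_until[k] <= self.t, "arm is blocked"
        x, d = self.X[k, self.t], int(self.D[k, self.t])
        self.blocked_until[k] = self.t + d    # unavailable for the next d - 1 steps
        self.t += 1
        return x, d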
Feedback. The difficulty of the optimisation problem depends on the information (or feedback) we have about the rewards and blocking durations of the arms. In this paper, we consider three feedback models in increasing order of difficulty. In the simplest setting, we know the values of all $X^k_t$ and $D^k_t$ in advance. We refer to this setting as the (offline) MAXREWARD optimization problem. In the online version of MAXREWARD, we assume that $X^k_t$ and $D^k_t$ are not known in advance but, at each time step $t$, the values of $X^k_t$ and $D^k_t$ for all $k$ at that particular time step are revealed before we choose an arm to pull. Finally, in the (classical) bandit setting, we assume that only the reward and blocking duration of the chosen arm are revealed after that arm is pulled². We will refer to the third model as the adversarial blocking bandit problem.

3 The Offline and Online MAXREWARD Problems

We start with the analysis of the offline and online MAXREWARD problems. As a slight preview of the next subsections: computing an optimal solution of the offline MAXREWARD problem is strongly NP-hard, even with a bounded variation budget. This result eliminates the possibility of a fully polynomial-time approximation scheme (FPTAS) for the problem unless P = NP. In addition, for the online MAXREWARD problem, we design an online greedy algorithm with a provable approximation guarantee.

3.1 The Computational Complexity of the Offline MAXREWARD Problem

To show that the MAXREWARD problem is strongly NP-hard, we reduce from the Boolean satisfiability problem with three literals per clause (3-SAT), which is known to be strongly NP-complete [Garey and Johnson, 1979]. In a 3-SAT instance, we are given $m$ variables and $n$ clauses. Each clause consists of three literals, and each literal is either a variable or the negation of a variable. The problem is to determine whether there is a Boolean true/false assignment to each variable such that the given 3-SAT instance is true (i.e., each clause contains at least one true literal).

Theorem 1. Computing an optimal solution for the MAXREWARD problem is strongly NP-hard. The hardness result holds even when the path variation is bounded.

3.2 Online MAXREWARD Problem with Bounded Variation Budget

In this section, we consider the online version of MAXREWARD. We devise a simple online greedy algorithm, Greedy Best Available Arm (Greedy-BAA), in which, at each time step, the algorithm plays an available arm with the highest reward. Algorithm 1 provides a detailed description of Greedy-BAA.

²In this paper, due to space limits, we do not deal with the full information feedback model, in which the reward and blocking duration values of all the arms are revealed at each time step after the pull.

Below, we show that Greedy-BAA provides an approximation guarantee for the offline MAXREWARD problem that depends on the blocking durations and the variation budget.

Theorem 2. Let $k^* = \arg\max_k D^{k}_{\max}/D^{k}_{\min}$ denote the arm with the highest max-min blocking duration ratio. Let $\pi^+$ denote the solution returned by Greedy-BAA, and $\pi^*$ denote an optimal solution of the offline MAXREWARD problem, respectively. We state that
$$\left(1 + \frac{D^{k^*}_{\max}}{D^{k^*}_{\min}}\right) r(\pi^+) + \frac{D^{k^*}_{\max}}{D^{k^*}_{\min}}\, B_T \ge r(\pi^*).$$
That is, Greedy-BAA has an approximation ratio of $\left(1 + \frac{D^{k^*}_{\max}}{D^{k^*}_{\min}}\right)^{-1}\left(1 - \frac{D^{k^*}_{\max} B_T}{D^{k^*}_{\min}\, r(\pi^*)}\right)$. Note that, as $D^{k^*}_{\min} \ge \utilde{D}$ and $D^{k^*}_{\max} \le \tilde{D}$, the approximation ratio above can be further bounded from below by $\left(1 + \frac{\tilde{D}}{\utilde{D}}\right)^{-1}\left(1 - \frac{\tilde{D} B_T}{\utilde{D}\, r(\pi^*)}\right)$.
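For illustration, here is a short Python sketch of Greedy-BAA (Algorithm 1) run against the environment sketch above, together with the path variation $B_T$ of a toy instance. It only uses the current step's rewards, as in the online feedback model; the instance values below are made up for the example.

def greedy_baa(env):
    """Greedy Best Available Arm: at each step, play an available arm with the
    highest reward at that step (falling back to the empty action at reward 0)."""
    policy, total = [], 0.0
    for _ in range(env.T):
        t, avail = env.t, env.available_arms()
        best = max(avail, key=lambda k: env.X[k, t], default=None)
        if best is not None and env.X[best, t] <= 0.0:
            best = None                       # the empty action is weakly better
        x, _ = env.pull(best)
        policy.append(best)
        total += x
    return policy, total

# Toy instance: K = 2 arms, T = 4 steps.
X = [[0.9, 0.9, 0.1, 0.1],
     [0.5, 0.5, 0.5, 0.5]]
D = [[2, 2, 2, 2],
     [1, 1, 1, 1]]
pi_plus, reward = greedy_baa(BlockingBanditEnv(X, D))
B_T = float(np.abs(np.diff(np.asarray(X), axis=1)).sum())  # path variation of this instance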
Comparison to the result of Basu et al. [2019]. We note that Basu et al. [2019] studied the MAXREWARD problem with path variation budget $B_T = 0$ (i.e., the reward values are fixed over time) and homogeneous blocking durations per arm (i.e., the blocking duration of each arm does not change over time). In that case, our proof provides an approximation ratio of $1/2$, whereas Basu et al. [2019] provide an approximation ratio of $(1 - 1/e - O(1/T))$. Their technique uses a much more complicated LP-based bounding proof that does not directly generalize to the case of $B_T > 0$ with varying blocking durations. On the other hand, our approximation ratio holds for the general case. For example, if $B_T$ grows more slowly with $T$ than $r(\pi^+)$, our algorithm guarantees an approximation ratio of $(1 + 2\tilde{D}/\utilde{D})^{-1}$.

4 The Adversarial Blocking Bandit Problem

Given the investigation of the (offline and online) MAXREWARD problems in the previous section, we now turn to the main focus of our paper, namely the online MAXREWARD problem with bandit feedback, a.k.a. the adversarial blocking bandit problem. While regret analyses are typically done by benchmarking against the best fixed policy in hindsight, we can easily show that in our setting this benchmark would perform arbitrarily poorly compared to the offline optimal solution. Instead of following the standard regret analysis, we therefore compare the performance of the designed algorithms to that of the offline optimal solution, using the following regret definition:

Dynamic approximate regret. We compare the performance of a policy with respect to the dynamic oracle algorithm that returns the offline optimal solution of MAXREWARD. We define the α-regret under a policy $\pi \in \mathcal{P}$ as the worst-case difference between an (offline) α-optimal sequence of actions and the expected performance under policy $\pi$. More precisely, let $\pi^*$ denote the arm pulling policy of the dynamic oracle algorithm. The α-regret of a policy $\pi \in \mathcal{P}$ against $\pi^*$ is defined to be
$$\mathcal{R}^\alpha_\pi(B_T, \tilde{D}, T) = \alpha\, r(\pi^*) - \mathbb{E}[r(\pi)],$$
where the expectation is over all the possible randomisation of $\pi$. Note that this regret notion is stronger than the regret against the best fixed policy in hindsight, as it is easy to show that the best fixed policy can perform arbitrarily badly compared to $\pi^*$.

4.1 Blocking Bandit with Known Path Variation Budget

We now turn to describe our new bandit algorithm, RGA, designed for the adversarial blocking bandit problem. This algorithm can be described as follows:

1. We split the time horizon $T$ into batches $T_1, \dots, T_m$ of size $\Delta_T$ each (except possibly the last batch):
$$T_j = \left\{ t \in \{1, \dots, \Delta_T\} : (j-1)\Delta_T + t \le \min\{j\Delta_T, T\} \right\}, \quad \text{for all } j = 1, \dots, m,$$
where $m = \lceil T/\Delta_T \rceil$ is the number of batches.

Algorithm 2: Repeating Greedy Algorithm (RGA)
Input: $\Delta_T$.
1 while $1 \le j \le \lceil T/\Delta_T \rceil$ do
2     Set $\tau = 1$
3     while $\tau \le \Delta_T$ do
4         if $1 \le \tau \le K$ then
5             Pull arm $k = \tau \bmod K + 1$
6             Receive reward and blocking duration $(X^k_\tau, D^k_\tau)$
7             Set $\hat{X}^k_t = X^k_\tau$ for all $t \in [1, \Delta_T]$
8         if $K + 1 \le \tau \le \tilde{D} + K$ then
9             Pull no arms
10        if $\tilde{D} + K + 1 \le \tau \le \Delta_T - \tilde{D}$ then
11            Pick arms according to GREEDY-BAA($\Delta_T - 2\tilde{D} - K, K, \hat{X}^1, \dots, \hat{X}^K, D^1, \dots, D^K$)
12        if $\Delta_T - \tilde{D} + 1 \le \tau \le \Delta_T$ then
13            Pull no arms
14        $\tau \leftarrow \tau + 1$
15    $j \leftarrow j + 1$

2. Within each batch, we spend the first $K$ rounds pulling each arm once.
Without loss of generality, we shall assume that arm $k$ is pulled in round $k$. After this, we spend the next $\tilde{D}$ rounds pulling no arms. This ensures that all arms will be available when we next pull an arm.

3. Then, up until the final $\tilde{D}$ rounds, we play Greedy-BAA using the rewards observed in the first $K$ rounds as the fixed rewards for each arm.

4. In the final $\tilde{D}$ rounds of each batch, we again pull no arms. This ensures that all of the arms are available at the beginning of the next batch. (A compact code sketch of this batch schedule, together with the meta-algorithm of Section 4.2, is given after Algorithm 3 below.)

Theorem 3. Suppose that the variation budget $B_T$ is known in advance and the maximal duration $\tilde{D} \ge 1$ is such that $\tilde{D}B_T \in o(T)$. The α-regret of RGA, where $\alpha = \frac{\utilde{D}}{\tilde{D} + \utilde{D}}$, is at most $O\big(\sqrt{T(2\tilde{D} + K)B_T}\big)$ when the parameter $\Delta_T$ is set to $\Big\lceil \sqrt{\frac{(T+1)(2\tilde{D}+K)}{2B_T}} \Big\rceil$.

Note that this bound is sub-linear in $T$ if $\tilde{D}B_T \in o(T)$ (e.g., if $\tilde{D}$ is bounded above by a constant and $B_T \in o(T)$). It is also worth noting that, while $\alpha = \frac{1}{1+\tilde{D}}$ might suggest that RGA can perform better than the worst-case performance of Greedy-BAA, with $B_T \in o(T)$ this is not the case (see the appendix for more details).

4.2 Blocking Bandit with Unknown Path Variation Budget

Note that RGA requires knowledge of $B_T$ in order to properly set $\Delta_T$. To resolve this issue, we propose RGA-META, a meta-bandit algorithm in which each arm corresponds to an instance of the RGA algorithm whose $\Delta_T$ parameter is tuned for a different variation budget. The time horizon $T$ is broken into meta-blocks of length $H$. At the start of each meta-block, an arm (i.e., an instance of RGA with its corresponding budget) is selected according to the well-known Exp3 algorithm [Auer et al., 2002]. RGA is then played for the next $H$ time steps with optimally tuned restarts (see Theorem 3 for more details). At the end of the meta-block, Exp3 observes a reward corresponding to the total reward accumulated by the chosen RGA instance in this meta-block. The intuition is that the meta-bandit will learn which budget is the best upper bound for RGA.

In what follows, we denote the set of arms available to the Exp3 algorithm by $\mathcal{J}$, and the corresponding set of variation budgets by $\mathcal{J}_B$. The RGA-META algorithm uses $\lceil \log_2(KT) \rceil + 1$ meta-arms with budgets $\mathcal{J}_B = \{2^0, 2^1, \dots, 2^{\lceil \log_2(KT) \rceil}\}$. That is, the budget values are powers of 2, up to the smallest power of 2 that is larger than $KT$, which is the ultimate upper bound on the path variation budget (as $B_T \le KT$).

Algorithm 3: Meta Repeating Greedy Algorithm (RGA-META)
Input: $T, K, \gamma \in (0, 1]$, batch length $H$.
1 Initialize: $|\mathcal{J}| = \lceil \log_2(KT) \rceil + 1$, $\mathcal{J}_B = \{2^0, 2^1, \dots, 2^{\lceil \log_2(KT) \rceil}\}$, $w_i(1) = 1$ for $i = 1, \dots, |\mathcal{J}|$.
2 for $\tau = 1, \dots, \lceil T/H \rceil$ do
3     Set $p_i(\tau) = (1 - \gamma) \frac{w_i(\tau)}{\sum_{j=1}^{|\mathcal{J}|} w_j(\tau)} + \frac{\gamma}{|\mathcal{J}|}$ for $i = 1, \dots, |\mathcal{J}|$
4     Draw $i_\tau$ randomly according to the probabilities $p_1(\tau), \dots, p_{|\mathcal{J}|}(\tau)$
5     Run RGA in batch $\tau$ with budget $\mathcal{J}_B[i_\tau] = 2^{i_\tau - 1}$ and optimally tuned restarts
6     Receive reward $x_{i_\tau}(\tau) \in [0, H]$ at the end of the batch
7     for $j = 1, \dots, |\mathcal{J}|$ do
8         $\hat{x}_j(\tau) = \begin{cases} x_j(\tau)/p_j(\tau) & \text{if } j = i_\tau \\ 0 & \text{otherwise} \end{cases}$, $\qquad w_j(\tau+1) = w_j(\tau)\exp\big(\gamma \hat{x}_j(\tau)/(H|\mathcal{J}|)\big)$
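To tie Algorithms 2 and 3 together, below is a compact Python sketch of RGA's within-batch schedule and of the Exp3 meta-layer of RGA-META, reusing the environment sketch from Section 2. The helper names, the clipping of batch lengths at the horizon, and the small-argument guard on the double logarithm are our own simplifications; this illustrates the control flow rather than reproducing the exact constants of the analysis.

import math

def run_rga_batch(env, D_max, batch_len):
    """One RGA batch (Algorithm 2): sample each arm once, rest D_max rounds so
    all arms free up, play Greedy-BAA on the frozen estimates, rest again."""
    x_hat, total = np.zeros(env.K), 0.0
    for tau in range(batch_len):
        if env.t >= env.T:
            break
        if tau < env.K:                           # exploration: pull arm tau once
            x, _ = env.pull(tau) if tau in env.available_arms() else env.pull(None)
            x_hat[tau] = x
        elif tau < env.K + D_max:                 # rest: every arm becomes available
            x, _ = env.pull(None)
        elif tau < batch_len - D_max:             # exploit greedily on frozen estimates
            avail = env.available_arms()
            x, _ = env.pull(max(avail, key=lambda k: x_hat[k], default=None))
        else:                                     # final rest: next batch starts fresh
            x, _ = env.pull(None)
        total += x
    return total

def rga_meta(env, D_max):
    """RGA-META (Algorithm 3): Exp3 over RGA instances, one candidate budget 2^i
    per meta-arm, meta-blocks of length H."""
    K, T = env.K, env.T
    n = math.ceil(math.log2(K * T)) + 1
    loglog = math.log(K * T) * max(math.log(math.log(K * T)), 1e-6)
    H = max(1, int(math.sqrt(T * (2 * D_max + K) / loglog)))
    gamma = min(1.0, math.sqrt(loglog / ((math.e - 1) * T)))
    w = np.ones(n)
    while env.t < T:
        p = (1 - gamma) * w / w.sum() + gamma / n
        i = int(np.random.choice(n, p=p))
        delta = max(1, int(math.sqrt((T + 1) * (2 * D_max + K) / (2 * 2.0 ** i))))
        start, block_reward = env.t, 0.0
        while env.t < min(start + H, T):          # RGA with restarts for H steps
            block_reward += run_rga_batch(env, D_max, min(delta, start + H - env.t))
        w[i] *= math.exp(gamma * (block_reward / p[i]) / (H * n))  # Exp3 update
    return w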
Let $B_i$ denote the total path variation within batch $i$, and let $\tilde{B} = \max_i B_i$. We state the following:

Theorem 4. Suppose that the variation budget $B_T$ is unknown in advance. In addition, suppose that the maximal blocking duration $\tilde{D} \ge 1$ is such that $\tilde{D}B_T \in o(T)$. The α-regret of RGA-META, where $\alpha = \frac{1}{1 + \tilde{D}}$, is at most
$$O\Big(\tilde{B}^{1/2}\, T^{3/4}\, (2\tilde{D} + K)^{1/4}\, \ln(KT)^{1/4}\, \ln(\ln(KT))^{1/4}\Big)$$
when the parameters of RGA-META are set as follows:
$$H = \sqrt{\frac{T(2\tilde{D} + K)}{\ln(KT)\ln(\ln(KT))}}, \qquad \gamma = \min\left\{1, \sqrt{\frac{\ln(KT)\ln(\ln(KT))}{(e - 1)T}}\right\}.$$

Note that, since $\tilde{B} \le HK$ by definition (the maximum path variation within a batch is at most $HK$), by setting $H = \sqrt{\frac{T(2\tilde{D}+K)}{\ln(KT)\ln(\ln(KT))}}$ we always get sub-linear regret in $T$ if $\tilde{D} \in O(1)$ (i.e., if $\tilde{D}$ is bounded above by a constant). Otherwise, we need $\tilde{B}^2\tilde{D} \in o(T)$. Furthermore, when $\tilde{B}$ is small, our regret bound tends to $O(T^{3/4})$. Thus, it is still an open question whether a tighter upper bound (e.g., $O(\sqrt{T})$) can be obtained for this case (i.e., when the variation budget is unknown).

5 Discussions

In this section, we provide some intuition for why we required $B_T$ and $\tilde{D}$ to be small in the previous sections. In particular, we show that if either the variation budget or the maximum blocking duration is large, the lower bound of the α-regret is $\Theta(T)$. We also discuss a potential lower bound for the α-regret of the adversarial blocking bandit problem in the case of $B_T \in o(KT)$ and $\tilde{D} \in O(1)$. Finally, we discuss how our results change if we use other types of variation budgets.

Large variation budget. Consider the case when $B_T \in \Theta(T)$. Theorem 3 indicates that the upper bound of the α-regret is $\Theta(T)$, where $\alpha = \frac{1}{1+\tilde{D}}$ as defined in Theorem 3. Indeed, we show that this is the best we can achieve:

Claim 1. For any $T > 0$ and $B_T \in \Theta(KT)$, there exists a sequence of rewards and blocking durations $X$ and $D$ such that $\mathcal{R}^\alpha_\pi(B_T, \tilde{D}, T) = \Theta(T)$ for that particular $(X, D)$.

Large blocking durations. If $\tilde{D} \in \Theta(T)$ and $\alpha$ is the approximation ratio of Greedy-BAA:

Claim 2. For any $T > 0$ and $\tilde{D} \in \Theta(T)$, there exists a sequence of rewards and blocking durations $X$ and $D$ such that $\mathcal{R}^\alpha_\pi(B_T, \tilde{D}, T) = \Theta(T)$ for that particular $(X, D)$.

Note that our regret bounds only make sense if $\tilde{D}B_T \in o(T)$. Thus, it is still an open question whether we can achieve sub-linear α-regret bounds in $T$ if both $B_T, \tilde{D} \in o(T)$ but $\tilde{D}B_T \in \Omega(T)$.

Almost matching regret lower bound for RGA. Consider the case when $\tilde{D} \in O(1)$. This implies that the α-regret bound of RGA reduces to $O(\sqrt{KTB_T})$. This in fact matches the known lower bounds on the 1-regret for the case of $\tilde{D} = 1$ (i.e., no blocking) from the literature [Auer et al., 2019]. In particular, with $\tilde{D} = 1$, the Greedy-BAA algorithm becomes optimal (see, e.g., Section 4.3 of Basu et al. [2019] for a discussion), and thus the α-regret notion becomes 1-regret. Therefore, if there existed an algorithm that could achieve an α-regret better than $O(\sqrt{KTB_T})$ in our setting, then it would also be able to achieve $O(\sqrt{KTB_T})$ 1-regret for the standard (i.e., non-blocking) adversarial bandit. It is also worth noting that, when $\tilde{D}$ is not bounded above by a constant, or the variation budget $B_T$ is not known in advance, the regret lower bound is still unknown.

Other variation budget definitions. There are a number of different variation budget definitions in the literature [Besbes et al., 2014, Wei and Luo, 2018, Auer et al., 2019].
It is worth noting that our analysis works in a similar way for the maximum variation budget $B^{\max}_T$ and the number of changes budget $L_T$, which can be defined as follows:
$$B^{\max}_T = \sum_{t=1}^{T-1} \max_{k \in \mathcal{K}} \left|X^k_{t+1} - X^k_t\right|, \qquad L_T = \#\left\{t : 1 \le t \le T-1,\ \exists k : X^k_t \neq X^k_{t+1}\right\}.$$
If we use these variation budgets instead, the regret in Theorem 3 becomes $O\big(\sqrt{(2\tilde{D}+K)T B^{\max}_T}\big)$ and $O\big(\sqrt{(2\tilde{D}+K)T L_T}\big)$, respectively. Furthermore, the approximation ratio of Greedy-BAA also changes. In particular, it becomes $\big[(1 + \tilde{D}) + \tilde{D}K B^{\max}_T / r(\pi^+)\big]^{-1}$ and $\big[(1 + \tilde{D}) + \tilde{D}K L_T / r(\pi^+)\big]^{-1}$, respectively. We refer the reader to the appendix for a more detailed discussion. It remains future work to derive regret bounds for other variation budgets.

Broader Impact

The paper examines a novel multi-armed bandit problem in which the decision-making agent aims to receive as many (cumulative) rewards as possible over a finite period subject to constraints. Our focus and results are largely theoretical. In particular, our contributions advance our understanding of multi-armed bandit models and their theoretical limitations, and benefit the general (theoretical) machine learning community, specifically the multi-armed bandit and online learning communities. In addition, we do not expect that our theoretical findings can be directly used in more applied domains.

Acknowledgments and Disclosure of Funding

Nicholas Bishop was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership grant. Debmalya Mandal was supported by a Columbia Data Science Institute Post-Doctoral Fellowship.
1. What is the focus and contribution of the paper regarding the adversarial blocking bandits problem?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. Do you have any concerns or weaknesses regarding the paper, especially regarding its experimental analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions

The paper addresses the problem of adversarial blocking bandits, where the rewards and unavailability of the arms are not generated by a stochastic process (they are fixed before the decision maker begins playing). This problem is relevant in practice, as many online resource allocation settings fit this mold. The authors further endow the setting with a so-called "path variation budget" - a cumulative limit on how much the reward of any arm can vary over time. In this setting, 3 main results are presented: 1) Showing the problem of computing the optimal sequence of arms is strongly NP-hard - meaning it is intractable, in practice, to compute the optimal sequence of plays even with full information on the sequence of rewards and delays of all arms. 2) A proxy optimal policy is proposed for this setting, Greedy-BAA, which plays greedily with full knowledge of the rewards and delays at each individual time step, but no further. For this policy, the authors are able to recover an approximation ratio relative to the best possible policy. While weaker than the state-of-the-art results for settings where the rewards do not vary over time, this approximation ratio manages to be more general and account for the proposed path variation budget. 3) Finally, the authors propose the RGA and the Meta-RGA algorithms, the latter not requiring knowledge of the variation budget, and provide bounds on their (α-)regret.

Strengths

I think the results presented in this paper are sound and the problem is interesting. The contribution seems novel and provides new insights into the blocking bandit problem when blocking durations vary over time and rewards exhibit bounded cumulative change over the entire length of the game. The significance (and relevance in the practical world) of the problem is well justified in the introduction. I am pleased to find the numerical experiment in the appendix, but I would have liked to see a more comprehensive numerical analysis of the algorithm. I have not checked the correctness of the proofs in the appendix.

Weaknesses

I think the practical relevance of the paper would have been more evident if there had been a more comprehensive numerical analysis with more problem settings being studied. More discussion would also have been useful, particularly on the relationship between the problem settings in the experiment and the space of parameters (which ones are harder and why? what characteristic or mechanism of the algorithm is highlighted in each, etc.). A "path variation budget" of 3 seems very low.

Edit after author feedback: I have read and am satisfied with the authors' reply. I will maintain my score and vote for acceptance.
NIPS
1. What is the focus and contribution of the paper regarding the blocking bandits model?
2. What are the strengths of the proposed approach, particularly in its thoroughness and exploration of natural variations?
3. What are the weaknesses of the paper, especially in its lack of discussion on the relation to scheduling and the classical approximation algorithms?
4. Do you have any concerns or questions regarding the paper's treatment of the offline and online parts?
5. How does the reviewer assess the novelty and significance of the proposed algorithm in the context of bandit feedback and learning?
Summary and Contributions

The blocking bandits model was introduced in a paper by Basu et al. [NeurIPS'19] for the stochastic setting. This paper studies the same problem in the adversarial setting. Blocking bandits is a MAB setting where an arm is blocked for some rounds after it is pulled (for example, if we schedule a job to a server, the server will be unavailable for some time while it completes the job). The contributions are as follows: (i) offline: first the authors ignore the learning aspect of the problem and treat it as an offline optimization problem where rewards and blocking times are known ahead of time. In this setting they show that the problem is strongly NP-hard. (ii) online: then they treat it as an online algorithms problem and propose an approximation by a greedy policy. This is equivalent to a learning setting with full feedback. (iii) bandit feedback: finally, with bandit feedback (where they only learn the rewards and blocking times for the arm pulled) they give a reduction to the online algorithm that works as follows. The time horizon is divided into epochs and, in each epoch, a bunch of arms are pulled to get samples of the rewards and blocking times, which are then used in an instance of the greedy algorithm. The algorithm needs to know a bound on the path length variation. (iv) if the bound on the path length variation is not known, then an Exp3 algorithm is added on top to select the best bound.

Strengths

* The setting is interesting and models several nice applications well.
* The authors are very thorough, in the sense that they explore all natural variations of the problem.

Weaknesses

* For the offline and online parts, I miss a discussion on the relation to scheduling. There is a vast literature on solving allocation problems with blocking constraints (see this excellent book https://arxiv.org/pdf/2001.06005.pdf for example). Many of those variations are known to be NP-hard and have classical approximation algorithms. While I didn't try myself to map it to the correct problem, I feel the authors should at least compare to the main problems in scheduling and argue why it is different (if it is). My hunch is that it may be a special case of one of those problems.
* I think the learning angle (and bandit feedback) is new and interesting, but I am somewhat underwhelmed by the actual algorithm. It seems like the main idea follows the standard reduction from bandit to full feedback (with some non-trivial adaptation, but the main idea still seems like the standard reduction).
NIPS
Title Analytically Tractable Bayesian Deep Q-Learning
Abstract
Reinforcement learning (RL) has gained increasing interest since the demonstration that it was able to reach human performance on video game benchmarks using deep Q-learning (DQN). The current consensus for training neural networks on such complex environments is to rely on gradient-based optimization. Although alternative Bayesian deep learning methods exist, most of them still rely on gradient-based optimization, and they typically do not scale on benchmarks such as the Atari game environment. Moreover, none of these approaches allow performing the analytical inference for the weights and biases defining the neural network. In this paper, we present how we can adapt the temporal difference Q-learning framework to make it compatible with the tractable approximate Gaussian inference (TAGI), which allows learning the parameters of a neural network using a closed-form analytical method. Throughout the experiments with on- and off-policy reinforcement learning approaches, we demonstrate that TAGI can reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters, and without relying on gradient-based optimization.

1 Introduction
Reinforcement learning (RL) has gained increasing interest since the demonstration that it was able to reach human performance on video game benchmarks using deep Q-learning (DQN) [17, 26]. Deep RL methods typically require an explicit definition of an exploration-exploitation function in order to compromise between using the current policy and exploring the potential of new actions. Such an issue can be mitigated by opting for a Bayesian approach where the selection of the optimal action to follow is based on Thompson sampling [23]. Bayesian deep learning methods based on variational inference [12, 10, 5, 14, 20, 29], Monte-Carlo dropout [8], or Hamiltonian Monte-Carlo sampling [18] have been shown to perform well on regression and classification benchmarks, despite being generally computationally more demanding than their deterministic counterparts. Note that none of these approaches allow performing the analytical inference for the weights and biases defining the neural network. Goulet et al.
[9] recently proposed the tractable approximate Gaussian inference (TAGI) method, which allows learning the parameters of a neural network using a closed-form analytical method. For convolutional architectures applied on classification benchmarks, this approach was shown to exceed the performance of other Bayesian and deterministic approaches based on gradient backpropagation, and to do so while requiring a smaller number of training epochs [19].
In this paper, we present how we can adapt the temporal difference Q-learning framework [24, 28] to make it compatible with TAGI. Section 2 first reviews the theory behind TAGI and the expected value formulation through Bellman's equation. Then, we present how the action-value function can be learned using TAGI. Section 3 presents the related work associated with Bayesian reinforcement learning, and Section 4 compares the performance of a simple TAGI-DQN architecture with the one obtained for its backpropagation-trained counterpart.

2 TAGI-DQN Formulation
This section presents how to adapt the DQN framework in order to make it compatible with analytical inference. First, Section 2.1 reviews the fundamental theory behind TAGI, and Section 2.2 reviews the concept of long-term expected value through Bellman's equation [25]. Then, Section 2.3 presents how to make the Q-learning formulation [28] compatible with TAGI.

2.1 Tractable Approximate Gaussian Inference
TAGI [9] relies on two main steps: forward uncertainty propagation and backward update. The forward uncertainty propagation step is intended to build the joint prior between the neural network parameters and the hidden states. This operation is made by propagating the uncertainty from the model parameters and the input layer through the neural network. TAGI relies on the Gaussian assumption for the prior of the parameters as well as for the variables in the input layer. In order to maintain the analytical tractability of the forward step, we rely on the Gaussian multiplicative approximation (GMA), which consists in approximating the product of two Gaussians by a Gaussian random variable whose moments match those calculated exactly using moment generating functions. In order to propagate uncertainty through non-linear activation functions, a second approximation is made by locally linearizing these functions at the expected value of the hidden unit being activated. Although this linearization procedure may seem to be a crude approximation, it has been shown to match or exceed the state-of-the-art performance on fully-connected neural networks (FNN) [9], as well as convolutional neural networks (CNN) and generative adversarial networks [19]. TAGI succeeds in maintaining a linear computational complexity for the forward step (1) by assuming a diagonal covariance for all parameters in the network and for all the hidden units within a same layer, and (2) by adopting a layer-wise approach where the joint prior is only computed and stored for the hidden units on pairs of successive hidden layers, as well as the hidden units within a layer and the parameters connecting into it.
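To make the forward moment propagation concrete, here is a minimal sketch of one fully-connected layer under the diagonal-Gaussian and independence assumptions described above. This is not the TAGI library's API: all names are ours, and the exact treatment in [9] also tracks covariances between successive layers, which are omitted here.

```python
import numpy as np

def gaussian_product_moments(mu_w, var_w, mu_x, var_x):
    """Moments of the product of two independent Gaussians (GMA).

    For Z = W * X with W, X independent, exact moment matching gives
    E[Z] = mu_w * mu_x and
    Var[Z] = var_w*var_x + var_w*mu_x**2 + var_x*mu_w**2.
    """
    mu_z = mu_w * mu_x
    var_z = var_w * var_x + var_w * mu_x**2 + var_x * mu_w**2
    return mu_z, var_z

def forward_fc_layer(mu_w, var_w, mu_b, var_b, mu_x, var_x):
    """Forward uncertainty propagation through one fully-connected layer.

    Each pre-activation is a sum of independent weight-input products plus
    an independent Gaussian bias (diagonal-covariance assumption).
    Shapes: mu_w, var_w are (n_out, n_in); mu_x, var_x are (n_in,).
    """
    mu_prod, var_prod = gaussian_product_moments(mu_w, var_w, mu_x, var_x)
    mu_z = mu_prod.sum(axis=1) + mu_b    # mean of the pre-activations
    var_z = var_prod.sum(axis=1) + var_b # variance, assuming independent terms
    return mu_z, var_z

def relu_moments_linearized(mu_z, var_z):
    """Local linearization of ReLU at the mean: a ~ phi(mu) + phi'(mu)(z - mu)."""
    slope = (mu_z > 0).astype(mu_z.dtype)  # phi'(mu_z) for ReLU
    return np.maximum(mu_z, 0.0), slope**2 * var_z
```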
This layer-wise approach is allowed by the inherent conditional independence that is built into feed-forward neural network architectures.
The second step, the backward update, consists in performing layer-wise recursive Bayesian inference which goes from hidden layer to hidden layer and from hidden layer to the parameters connecting into it. Given the Gaussian approximation for the joint prior throughout the network, the inference can be done analytically while still maintaining a linear computational complexity with respect to the number of weight parameters in the network. TAGI allows inferring the diagonal posterior knowledge for weight and bias parameters, either using one observation at a time, or using mini-batches of data. As we will show in the next sections, this online learning capacity is well suited for RL problems where we experience episodes sequentially and where we need to define a tradeoff between exploration and exploitation, as a function of our knowledge of the expected value associated with being in a state and taking an action.

2.2 Expected Value and Bellman's Equation
We define $r(s, a, s')$ as the reward for being in a state $s \in \mathbb{R}^{\mathrm{S}}$, taking an action $a \in \mathcal{A} = \{a_1, a_2, \cdots, a_A\}$, and ending in a state $s' \in \mathbb{R}^{\mathrm{S}}$. For simplicity, we use the short-form notation for the reward $r(s, a, s') \equiv r(s)$ in order to define the value as the infinite sum of discounted rewards
$$v(s_t) = \sum_{k=0}^{\infty} \gamma^k r(s_{t+k}). \quad (1)$$
As we do not know what the future states $s_{t+k}$ for $k > 0$ will be, we need to consider them as random variables $S_{t+k}$, so that the value $V(s_t)$ becomes a random variable as well,
$$V(s_t) = r(s_t) + \sum_{k=1}^{\infty} \gamma^k r(S_{t+k}). \quad (2)$$
Rational decisions regarding which action to take among the set $\mathcal{A}$ are based on the maximization of the expected value as defined by the action-value function
$$q(s_t, a_t) = \mu_V \equiv \mathbb{E}[V(s_t, a_t, \pi)] = r(s_t) + \mathbb{E}\!\left[\sum_{k=1}^{\infty} \gamma^k r(S_{t+k})\right], \quad (3)$$
where it is assumed that at each time $t$, the agent takes the action defined in the policy $\pi$. In the case of episode-based learning where the agent interacts with the environment, we assume we know the tuple of states $s_t$ and $s_{t+1}$, so that we can redefine the value as
$$V(s_t, a_t) = r(s_t) + \gamma\left(r(s_{t+1}) + \sum_{k=1}^{\infty} \gamma^k r(S_{t+1+k})\right) = r(s_t) + \gamma V(s_{t+1}, a_{t+1}). \quad (4)$$
Assuming that the value $V \sim \mathcal{N}(v; \mu_V, \sigma_V^2)$ in Equations 2 and 4 is described by Gaussian random variables, we can reparameterize these equations as the sum of the expected value $q(s, a)$ and a zero-mean Gaussian random variable $E \sim \mathcal{N}(\epsilon; 0, 1)$, so that
$$V(s, a) = q(s, a) + \sigma_V E, \quad (5)$$
where the variance $\sigma_V^2$ and $E$ are assumed here to be independent of $s$ and $a$. Although in a more general framework this assumption could be relaxed, such a heteroscedastic variance term is outside the scope of this paper. Using this reparameterization, we can write Equation 4 as the discounted difference between the expected values of two subsequent states
$$q(s_t, a_t) = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) - \sigma_{V_t} E_t + \gamma \sigma_{V_{t+1}} E_{t+1} = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) + \sigma_V E. \quad (6)$$
Note that in Equation 6, $\sigma_{V_t}$ and $\gamma \sigma_{V_{t+1}}$ can be combined in a single standard deviation parameter $\sigma_V$ with the assumption that $E_i \perp E_j, \forall i \ne j$.
In the case where, at a time $t$, we want to update the Q-values encoded in the neural network only after observing $n$-step returns [15], we can reformulate the observation equation so that
$$q(s_t, a_t) = \sum_{i=0}^{n-t-1} \gamma^i r(s_{t+i}) + \gamma^{n-t} q(s_n, a_n) + \sigma_V E_t, \quad \forall t \in \{1, 2, \cdots, n-1\}. \quad (7)$$
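As an illustration of Equation 7, the following sketch assembles the n-step observations (targets) for q(s_t, a_t) from a stored trajectory; it is a sketch under our own naming, not the authors' implementation.

```python
import numpy as np

def n_step_targets(rewards, q_bootstrap, gamma):
    """Assemble the n-step observations of Equation 7.

    rewards:     sequence [r(s_1), ..., r(s_{n-1})] collected along a trajectory
    q_bootstrap: scalar q(s_n, a_n), the expected value at the horizon
    gamma:       discount factor

    Returns an array where targets[t-1] = sum_{i=0}^{n-t-1} gamma^i r(s_{t+i})
    + gamma^{n-t} q(s_n, a_n); under Equation 7, each entry is treated as a
    noisy observation of q(s_t, a_t) with observation noise sigma_V.
    """
    n = len(rewards) + 1
    targets = np.empty(n - 1)
    running = q_bootstrap
    # Work backwards: y_t = r(s_t) + gamma * y_{t+1}, with y_n = q(s_n, a_n).
    for j in range(n - 2, -1, -1):
        running = rewards[j] + gamma * running
        targets[j] = running
    return targets
```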
Note that in the application of Equation 7, we employ the simplifying assumption that $E_t \perp E_{t+i}, \forall i \ne 0$, as Equation 6 already makes simplifying assumptions for the independence of $\sigma_V^2$ and $E$. Note that in a more general framework, this assumption could be relaxed. An example of n-step returns is presented in the algorithm displayed in §1 of the supplementary material.
The following subsection presents, for the case of categorical actions, how to model the deterministic action-value function $q(s, a)$ using a neural network.

2.3 TAGI Deep Q-learning for Categorical Actions
Suppose we represent the environment's states at times $t$ and $t+1$ by $\{s, s'\}$, and the expected value for each of the $A$ possible actions $a \in \mathcal{A}$ by the vector $\boldsymbol{q} \in \mathbb{R}^A$. In that context, the role of the neural network is to model the relationships between $\{s, a\}$ and $\boldsymbol{q}$. Figure 1a presents a directed acyclic graph (DAG) describing the interconnectivity in such a neural network, where red nodes denote state variables, green nodes are vectors of hidden units $\boldsymbol{z}$, the blue box is a compact representation of the structure of a convolutional neural network, and gray arrows represent the weights and biases $\boldsymbol{\theta}$ connecting the different hidden layers. Note that unlike the other gray arrows, the red ones in (b) are not directed arcs representing dependencies; they simply outline the flow of information that takes place during the inference step. For simplification purposes, the convolutional operations are omitted and all regrouped under the CNN box [19]. In order to learn the parameters $\boldsymbol{\theta}$ of such a network, we need to expand the graph from Figure 1a to include the reward $r$, the error term $\sigma_V E$, and $\boldsymbol{q}'$, the Q-values of the time step $t+1$. This configuration is presented in Figure 1b, where the nodes that have been doubled represent the states $s$ and $s'$, which are both evaluated in a network sharing the same parameters. When applying Equation 6, Q-values corresponding to a specific action can be selected using a vector $\boldsymbol{h}_i \in \{0, 1\}^A$ having a single non-zero value for the $i$-th component, identifying which action was taken at time $t$, so that
$$q_i = [\boldsymbol{q}]_i = \boldsymbol{h}_i^{\intercal} \boldsymbol{q}. \quad (8)$$
During the network's training, analogously to Thompson sampling [23], the vector $\boldsymbol{h}'_i \in \{0, 1\}^A$ is defined such that the $i$-th non-zero value corresponds to the index of the largest value among $\boldsymbol{q}'$, a vector of realizations from the neural network's posterior predictive output $\boldsymbol{Q} \sim \mathcal{N}(\boldsymbol{q}'; \boldsymbol{\mu}_{Q|D}, \boldsymbol{\Sigma}_{Q|D})$. Because of the Gaussian assumptions in TAGI, this posterior predictive is readily available from the forward uncertainty propagation step, as outlined in §2.1.
[Figure 1: (a) Neural network DAG for modelling the action-value function: s → CNN → z(1) → z(2) → q, with parameters θ(c0), θ(0), θ(1), θ(q).]
The red arrows in Figure 1b outline the flow of information during the inference procedure. The first step consists in inferring $\boldsymbol{q}$ using the relationships defined in either Equation 6 or 7. As this is a linear equation involving Gaussian random variables, the inference is analytically tractable. From there, one can follow the same layer-wise recursive procedure proposed by Goulet et al. [9] in order to learn the weights and biases in $\boldsymbol{\theta}$. With the exclusion of the standard hyperparameters related to the network architecture, batch size, buffer size, or the discount factor, this TAGI-DQN framework only involves a single hyperparameter, $\sigma_V$, the standard deviation for the value function.
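Because the posterior predictive over the Q-values is Gaussian and available in closed form from the forward pass, the Thompson-sampling-style selection of h'_i described above reduces to drawing one realization of q' and taking its argmax. A minimal sketch, assuming the network exposes per-action predictive means and variances (all names are ours):

```python
import numpy as np

def select_action(mu_q, var_q, rng):
    """Thompson-sampling-style action selection (cf. Equation 8).

    mu_q, var_q: per-action mean and variance of the Gaussian posterior
                 predictive Q ~ N(mu_{Q|D}, diag(var_{Q|D})) obtained from
                 the TAGI forward uncertainty propagation step.
    Returns the index i of the largest sampled Q-value; the one-hot vector
    h_i then selects q_i = h_i^T q as in Equation 8.
    """
    q_sample = rng.normal(mu_q, np.sqrt(var_q))  # one realization of q'
    return int(np.argmax(q_sample))

rng = np.random.default_rng(0)
action = select_action(np.array([0.1, 0.5, 0.2]), np.array([0.3, 0.3, 0.3]), rng)
```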
Note that when using CNNs with TAGI, Nguyen and Goulet [19] recommended using a decay function for the standard deviation of the observation noise, so that after seeing $e$ batches of n-steps,
$$\sigma_V^e = \max\!\left(\sigma_V^{\min},\, \eta^{\,e-1}\,\sigma_V\right). \quad (9)$$
The model in Equation 9 has three hyperparameters: the minimal noise parameter $\sigma_V^{\min}$, the decay factor $\eta$, and the initial noise parameter $\sigma_V$. As shown by Nguyen and Goulet [19] for CNNs, and as we show in §4 for RL problems, TAGI's performance is robust towards the selection of these hyperparameters.
A comparison of the implementations of TAGI and backpropagation on a deep Q-network with experience replay [17] is shown in Figure 2. A practical implementation of n-step TAGI deep Q-learning is presented in Algorithm 1 of the supplementary material.

3 Related Works
Over the last decades, several approximate methods have been proposed to allow for Bayesian neural networks [18, 12, 10, 5, 14, 20, 29, 8] with various degrees of approximation. Although some of these methods have been shown to be capable of tackling classification tasks on datasets such as ImageNet [20], few of them have been applied to large-scale RL benchmark problems. The key idea behind using Bayesian methods for reinforcement learning is to consider the uncertainty associated with Q-functions in order to identify a tradeoff between exploring the performance of possible actions and exploiting the current optimal policy [25]. This typically takes the form of performing Thompson sampling [23] rather than relying on heuristics such as ε-greedy.
For instance, MC dropout [8] was introduced as a method intrinsically suited for reinforcement learning. Nevertheless, five years after its inception, the approach has not yet been reliably scaled to more advanced benchmarks such as the Atari game environment. The same applies to Bayes-by-backprop [5], which was recently applied to simple RL problems [13] and has not yet been applied to more challenging environments requiring convolutional networks. On the other hand, Bayesian neural networks relying on sampling methods such as Hamiltonian Monte-Carlo [18] are typically too computationally demanding to be scaled to RL problems involving such complex environments.
Although mainstream methods related to Bayesian neural networks have seldom been applied to complex RL problems, several research teams have worked on alternative approaches that allow performing Thompson sampling. For instance, Azizzadenesheli et al. [4] have employed a deep Q-network where the output layer relies on Bayesian linear regression. This approach was shown to outperform its deterministic counterparts on Atari games. Another approach by Osband et al. [21] employs bootstrapped deep Q-networks with multiple network heads in order to represent the uncertainty in the Q-functions. This approach was also shown to scale to Atari games while presenting an improved performance in comparison with deterministic deep Q-networks. Finally, Wang and Zhou [27] have tackled the same problem, but this time by modelling the variability in the Q-functions through a latent space learned using variational inference.
Despite its good performance on the benchmarks tested, it could not be scaled to the Atari game environment.
The TAGI deep Q-network presented in this paper is the first demonstration that an analytically tractable inference approach for Bayesian neural networks can be scaled to a problem as challenging as the Atari game environment.

4 Benchmarks
This section compares the performance of TAGI with backpropagation-based standard implementations on off- and on-policy deep RL. For the off-policy RL, both TAGI-based and backpropagation-based RL approaches are applied to deep Q-learning with experience replay (see Algorithms 1 and 2) for the lunar lander and cart pole environments. For the on-policy RL, TAGI is applied to the n-step Q-learning algorithm and is compared with its backpropagation-based counterpart [15]. We perform the comparison for five Atari games: Beam Rider, Breakout, Pong, Qbert, and Space Invaders. Note that these five games are commonly selected for tuning hyperparameters for the entire Atari suite [15, 16]. All benchmark environments are taken from the OpenAI Gym [6].

4.1 Experimental Setup
In the first experiments with off-policy RL, we use a fully-connected multilayer perceptron (MLP) with two hidden layers of 256 units for the lunar lander environment, and with one hidden layer of 64 units for the cart pole environment. In these experiments, there is no need for input processing nor for reward normalization. Note that unlike the deterministic Q-network, TAGI does not use a target Q-network for ensuring stability during training, which eliminates the hyperparameter related to the target update frequency. For the deep Q-network trained with backpropagation, we employ the pre-tuned implementation of OpenAI Baselines [7] with all hyperparameters set to their default values.
For the Atari experiments with on-policy RL, we use the same input processing and model architecture as Mnih et al. [15]. The Q-network uses two convolutional layers (16-32) and a fully-connected MLP of 256 units. TAGI n-step Q-learning only uses a single network to represent the value function for each action, and relies on a single learning agent. The reason behind this choice is that TAGI's current main library is only available in Matlab, which does not support running a Python multiprocessing module such as the OpenAI Gym. In the context of TAGI, we use a horizon of 128 steps and, as recommended by Andrychowicz et al. [3] and following practical implementation details [1, 2], each return in the n-step Q-learning algorithm is normalized by subtracting the average return of the current n steps and then dividing by the empirical standard deviation of the set of n returns. The standard deviation for the value function, $\sigma_V$, is initialized at 2; it is decayed every 128 steps with a factor η = 0.9999, and the minimal standard deviation for the value function is $\sigma_V^{\min} = 0.3$. These hyperparameter values were not grid-searched but simply adapted to the scale of the problems, and are kept constant for all experiments. The complete details of the network architecture and hyperparameters are provided in the supplementary material.

4.2 Results
For the first set of experiments using off-policy RL, Figure 3 presents the average reward over 100 episodes for three runs for the lunar lander and cart pole environments.
The TAGI-based deep Q-learning with experience replay shows faster and more stable learning than the one relying on backpropagation, while not requiring a target network.
Table 1 shows that the average reward over the last 100 episodes obtained using TAGI is greater than the one obtained using backpropagation.
Figure 4 compares the average reward over 100 episodes for three runs obtained for TAGI with the results from Mnih et al. [15] for the second set of experiments on Atari games. Note that all results presented were obtained for a single agent, and that the results for the backpropagation-trained networks are only reported at the end of each epoch.
Results show that TAGI outperforms the original n-step Q-learning algorithm trained with backpropagation [15] on Breakout, Pong, and Qbert, while underperforming on Beam Rider and Space Invaders. The average training time of TAGI for an Atari game is approximately 13 hours of GPU computation, benchmarked on a 4-core Intel desktop with 32 GB of RAM and an NVIDIA GTX 1080 Ti GPU. For the off-policy deep RL experiments, TAGI's training on CPU is approximately three times slower than its backpropagation-trained counterpart. The reason for this slower training time is its intrinsically different inference engine, which makes TAGI's implementation incompatible with existing libraries such as TensorFlow or PyTorch. TAGI's library development is still ongoing, and it is not yet fully optimized for computational efficiency. Overall, these results for on- and off-policy RL approaches confirm that TAGI can be applied to large-scale problems such as deep Q-learning.

5 Discussion
Although the performance of TAGI does not systematically outperform its backpropagation-based counterpart, it requires fewer hyperparameters (see §3 in the supplementary material). This advantage is one of the key aspects for improving generalization and reducing the computational cost of the hyperparameter tuning process, which are key challenges in the current state of deep RL [11]. For instance, in this paper, the TAGI hyperparameters relating to the standard deviation of the value function ($\sigma_V$) are kept constant across all experiments. Moreover, since these hyperparameters were not subject to grid search in order to optimize the performance, the results obtained here are representative of what a user should obtain by simply adapting the hyperparameters to fit the specificities and scale of the environment at hand.
More advanced RL approaches such as advantage actor-critic (A2C) [15] and proximal policy optimization (PPO) [22] employ two-network architectures in which one network is used to approximate a value function and the other is employed to encode the policy. The current TAGI-RL framework is not yet able to handle such architectures because training a policy network involves an optimization problem for the selection of the optimal action.
Backpropagation-based approaches currently rely on gradient optimization to perform this task, while TAGI will require developing alternative approaches in order to maintain analytical tractability without relying on gradient-based optimization.

6 Conclusion
This paper presents how to adapt TAGI to deep Q-learning. Throughout the experiments, we demonstrated that TAGI could reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters. These results challenge the common belief that for large-scale problems such as the Atari environment, neural networks can only be trained by relying on gradient backpropagation. We have shown here that this paradigm is no longer the only alternative, as TAGI has a linear computational complexity and can be used to learn the parameters of complex networks in an analytically tractable manner, without relying on gradient-based optimization.

References
[1] PyTorch examples for the REINFORCE algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py, 2019.
[2] PyTorch examples for the actor-critic algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py, 2020.
[3] M. Andrychowicz, A. Raichuk, P. Stańczyk, M. Orsini, S. Girgin, R. Marinier, L. Hussenot, M. Geist, O. Pietquin, M. Michalski, S. Gelly, and O. Bachem. What matters for on-policy deep actor-critic methods? A large-scale study. In International Conference on Learning Representations, 2021.
[4] K. Azizzadenesheli, E. Brunskill, and A. Anandkumar. Efficient exploration through Bayesian deep Q-networks. In IEEE Information Theory and Applications Workshop, pages 1–9, 2018.
[5] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
[6] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[7] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov. OpenAI Baselines. https://github.com/openai/baselines, 2017.
[8] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML proceedings, pages 1050–1059, 2016.
[9] J-A. Goulet, L. H. Nguyen, and S. Amiri. Tractable approximate Gaussian inference for Bayesian neural networks. arXiv preprint, 2020.
[10] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
[11] A. Irpan. Deep reinforcement learning doesn't work yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
[12] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, volume 28, 2015.
[13] Z. Lipton, X. Li, J. Gao, L. Li, F. Ahmed, and L. Deng. BBQ-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[14] C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors.
In ICML proceedings, pages 1708–1716, 2016.
[15] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML proceedings, pages 1928–1937. PMLR, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, December 2013.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, and G. Ostrovski. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] R. M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[19] L. H. Nguyen and J-A. Goulet. Analytically tractable inference in deep neural networks. arXiv preprint, 2021.
[20] K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R. E. Turner, R. Yokota, and M. E. Khan. Practical deep learning with Bayesian principles. In Advances in Neural Information Processing Systems proceedings, 2019.
[21] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In NeurIPS proceedings, pages 4033–4041, 2016.
[22] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[23] M. Strens. A Bayesian framework for reinforcement learning. In ICML proceedings, pages 943–950, 2000.
[24] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[25] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press, 2nd edition, 2018.
[26] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
[27] Z. Wang and M. Zhou. Thompson sampling via local uncertainty. In ICML proceedings, volume 119, pages 10115–10125, 2020.
[28] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[29] A. Wu, S. Nowozin, E. Meeds, R. E. Turner, J. M. Hernández-Lobato, and A. L. Gaunt. Deterministic variational inference for robust Bayesian neural networks. In ICLR proceedings, 2019.

Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code will be made available upon the publication of the paper.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
[Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus of the paper regarding using TAGI in DQN?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and performance of the proposed method?
4. Do you have any concerns about the comparison with other works and the description of the algorithm?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes to use TAGI, an approach for training BNNs, within DQN to enable Thompson sampling.
Review
The paper has very limited novelty, simply plugging TAGI (a previously proposed approach for training BNNs) into DQN. The results are also not particularly impressive, being run on a very limited set of environments and being outperformed by vanilla DQN on several. While having fewer hyperparameters to tune than training the network via gradient descent is a nice benefit, a more thorough evaluation would be needed to demonstrate the advantages of TAGI in reinforcement learning. Why not compare to Thompson sampling with Bayesian approaches to Q-learning as in Azizzadenesheli et al. or Osband et al.? This seems like a more direct comparison than vanilla DQN. The authors also describe their learning algorithm as analytic, but it still relies on repeated iterations, so it is unclear to me what the benefit of the "analytic" updates is. If the key benefit is just that the uncertainty is analytically computed when evaluating the network on new inputs, then the authors should compare the evaluation times of TAGI vs. other approximate BNN methods that require multiple forward passes.
NIPS
Title Analytically Tractable Bayesian Deep Q-Learning Abstract Reinforcement learning (RL) has gained increasing interest since the demonstration 1 it was able to reach human performance on video game benchmarks using deep 2 Q-learning (DQN). The current consensus for training neural networks on such 3 complex environments is to rely on gradient-based optimization. Although alterna4 tive Bayesian deep learning methods exist, most of them still rely on gradient-based 5 optimization, and they typically do not scale on benchmarks such as the Atari game 6 environment. Moreover none of these approaches allow performing the analytical 7 inference for the weights and biases defining the neural network. In this paper, we 8 present how we can adapt the temporal difference Q-learning framework to make 9 it compatible with the tractable approximate Gaussian inference (TAGI), which 10 allows learning the parameters of a neural network using a closed-form analytical 11 method. Throughout the experiments with onand off-policy reinforcement learn12 ing approaches, we demonstrate that TAGI can reach a performance comparable to 13 backpropagation-trained networks while using fewer hyperparameters, and without 14 relying on gradient-based optimization. 15 N/A Reinforcement learning (RL) has gained increasing interest since the demonstration1 it was able to reach human performance on video game benchmarks using deep2 Q-learning (DQN). The current consensus for training neural networks on such3 complex environments is to rely on gradient-based optimization. Although alterna-4 tive Bayesian deep learning methods exist, most of them still rely on gradient-based5 optimization, and they typically do not scale on benchmarks such as the Atari game6 environment. Moreover none of these approaches allow performing the analytical7 inference for the weights and biases defining the neural network. In this paper, we8 present how we can adapt the temporal difference Q-learning framework to make9 it compatible with the tractable approximate Gaussian inference (TAGI), which10 allows learning the parameters of a neural network using a closed-form analytical11 method. Throughout the experiments with on- and off-policy reinforcement learn-12 ing approaches, we demonstrate that TAGI can reach a performance comparable to13 backpropagation-trained networks while using fewer hyperparameters, and without14 relying on gradient-based optimization.15 1 Introduction16 Reinforcement learning (RL) has gained increasing interest since the demonstration it was able to17 reach human performance on video game benchmarks using deep Q-learning (DQN) [17, 26]. Deep18 RL methods typically require an explicit definition of an exploration-exploitation function in order to19 compromise between using the current policy and exploring the potential of new actions. Such an20 issue can be mitigated by opting for a Bayesian approach where the selection of the optimal action to21 follow is based on Thompson sampling [23]. Bayesian deep learning methods based on variational22 inference [12, 10, 5, 14, 20, 29], Monte-Carlo dropout [8], or Hamiltonian Monte-Carlo sampling23 [18] have shown to perform well on regression and classification benchmarks, despite being generally24 computationally more demanding than their deterministic counterparts. Note that none of these25 approaches allow performing the analytical inference for the weights and biases defining the neural26 network. Goulet et al. 
[9] recently proposed the tractable approximate Gaussian inference (TAGI)27 method which allows learning the parameters of a neural network using a closed-form analytical28 method. For convolutional architectures applied on classification benchmarks, this approach was29 shown to exceed the performance of other Bayesian and deterministic approaches based on gradient30 backpropagation, and to do so while requiring a smaller number of training epochs [19].31 In this paper, we present how can we adapt the temporal difference Q-learning framework [24, 28] to32 make it compatible with TAGI. Section 2 first reviews the theory behind TAGI and the expected value33 formulation through the Bellman’s Equation. Then, we present how the action-value function can34 be learned using TAGI. Section 3 presents the related work associated with Bayesian reinforcement35 learning, and Section 4 compares the performance of a simple TAGI-DQN architecture with the one36 obtained for its backpropagation-trained counterpart.37 Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. 2 TAGI-DQN Formulation38 This section presents how to adapt the DQN frameworks in order to make them compatible with39 analytical inference. First, Section 2.1 reviews the fundamental theory behind TAGI, and Section 2.140 reviews the concept of long-term expected value through the Bellman’s equation [25]. Then, Section41 2.3 presents how to make the Q-learning formulation [28] compatible with TAGI.42 2.1 Tractable Approximate Gaussian Inference43 TAGI [9] relies on two main steps; forward uncertainty propagation and backward update. The44 first forward uncertainty propagation step is intended to build the joint prior between the neural45 network parameters and the hidden states. This operation is made by propagating the uncertainty46 from the model parameters and the input layer through the neural network. TAGI relies on the47 Gaussian assumption for the prior of parameters as well as for the variables in the input layer. In order48 to maintain the analytical tractability of the forward step, we rely on the Gaussian multiplicative49 approximation (GMA) which consists in approximating the product of two Gaussians by a Gaussian50 random variable whose moments match those calculated exactly using moment generating functions.51 In order to propagate uncertainty through non-linear activation functions, a second approximation52 made by locally linearizing these function at the expected value of the hidden unit being activated.53 Although this linearization procedure may seems to be a crude approximation, it has been shown to54 match or exceeds the state-of-the-art performance on fully-connected neural networks (FNN) [9],55 as well as convolutional neural networks (CNN) and generative adversarial networks [19]. TAGI56 succeeds in maintaining a linear computational complexity for the forward steps, (1) by assuming57 a diagonal covariance for all parameters in the network and for all the hidden units within a same58 layer, and (2) by adopting a layer-wise approach where the joint prior is only computed and stored for59 the hidden units on pairs of successive hidden layers, as well as the hidden units within a layer and60 the parameters connecting into it. 
This layer-wise approach is allowed by the inherent conditional61 independence that is built-in feed-forward neural network architectures.62 The second backward update-step consists in performing layer-wise recursive Bayesian inference63 which goes from hidden-layer to hidden-layer and from hidden-layer to the parameters connecting64 into it. Given the Gaussian approximation for the joint prior throughout the network, the inference65 can be done analytically while still maintaining a linear computational complexity with respect to the66 number of weight parameters in the network. TAGI allows inferring the diagonal posterior knowledge67 for weights and bias parameters, either using one observation at a time, or using mini-batches of68 data. As we will show in the next sections, this online learning capacity is best suited for RL69 problems where we experience episodes sequentially and where we need to define a tradeoff between70 exploration and exploitation, as a function of our knowledge of the expected value associated with71 being in a state and taking an action.72 2.2 Expected Value and Bellman’s Equation73 We define r(s, a, s′) as the reward for being in a state s ∈ RS, taking an action a ∈ A =74 {a1, a2, · · · aA}, and ending in a state s′ ∈ RS. For simplicity, we use the short-form notation75 for the reward r(s, a, s′) ≡ r(s) in order to define the value as the infinite sum of discounted rewards76 77 v(s) = ∞∑ k=0 γkr(st+k). (1) As we do not know what will be the future states st+k for k > 0, we need to consider them as random78 variables (St+k), so that the value V (st) becomes a random variable as well,79 V (st) = r(st) + ∞∑ k=1 γkr(St+k). (2) Rational decisions regarding which action to take among the set A is based the maximization of the80 expected value as defined by the action-value function81 q(st, at) = µV ≡ E[V (st, at, π)] = r(st) + E [ ∞∑ k=1 γkr(St+k) ] , (3) where it is assumed that at each time t, the agent takes the action defined in the policy π. In the case82 of episode-based learning where the agent interacts with the environment, we assume we know the83 tuple of states st and st+1, so that we can redefine the value as84 V (st, at) = r(st) + γ ( r(st+1) + ∞∑ k=1 γkr(St+1+k) ) = r(st) + γV (st+1, at+1). (4) Assuming that the value V ∼ N (v;µV , σ2V ) in Equations 2 and 4 is described by Gaussian random85 variables, we can reparameterize these equations as the sum of the expected value q(s, a) and a86 zero-mean Gaussian random variable E ∼ N ( ; 0, 1), so that87 V (s, a) = q(s, a) + σV E , (5) where the variance σ2V and E are assumed here to be independent of s and a. Although in a more88 general framework this assumption could be relaxed, such an heteroscedastic variance term is outside89 from the scope of this paper. Using this reparameterization, we can write Equation 4 as the discounted90 difference between the expected values of two subsequent states91 q(st, at) = r(st) + γq(st+1, at+1)− σVtEt + γσVt+1Et+1 = r(st) + γq(st+1, at+1) + σV E . (6) Note that in Equation 6, σVt and γσVt+1 can be combined in a single standard deviation parameters92 σV with the assumption that Ei ⊥ Ej ,∀i 6= j.93 In the case where at a time t, we want to update the Q-values encoded in the neural net only after94 observing n-step returns [15], we can reformulate the observation equation so that95 q(st, at) = n−t−1∑ i=0 γir(st+i) + γ n−tq(sn, an) + σV Et,∀t = {1, 2, · · · , n− 1}. 
(7) Note that in the application of Equation 7, we employ the simplifying assumption that Et ⊥ Et+i,∀i 6=96 0, as Equation 6 already makes simplifying assumptions for the independence of σ2V and E . Note97 that in a more general framework, this assumption could be relaxed. An example of n-step returns is98 presented in the the algorithm displayed in §1 from the supplementary material.99 The following subsections will present, for the case of categorical actions, how to model the deter-100 ministic action-value function q(s, a) using a neural network.101 2.3 TAGI Deep Q-learning for Categorical Actions102 Suppose we represent the environment’s state at a time t and t+ 1 by {s, s′}, and the expected value103 for each of the A possible actions a ∈ A by the vector q ∈ RA. In that context, the role of the neural104 network is to model the relationships between {s, a} and q. Figure 1a presents a directed acyclic105 graph (DAG) describing the interconnectivity in such a neural network, where red nodes denote state106 variables, green nodes are vectors of hidden units z, the blue box is a compact representation for107 the structure of a convolutional neural network, and where gray arrows represent the weights and108 bias θ connecting the different hidden layers. Note that unlike other gray arrows, the red ones in109 (b) are not directed arcs representing dependencies, but they simply outline the flow of information110 that takes place during the inference step. For simplification purposes, the convolutional operations111 are omitted and all regrouped under the CNN box [19]. In order to learn the parameters θ of such a112 network, we need to expand the graph from Figure 1a to include the reward r, the error term σV ,113 and q′, the q-values of the time step t+ 1. This configuration is presented in Figure 1b where the114 nodes that have been doubled represent the states s and s′ which are both evaluated in a network115 sharing the same parameters. When applying Equation 6, q-values corresponding to a specific action116 can be selected using a vector hi ∈ {0, 1}A having a single non-zero value for the i-th component117 identifying which action was taken at a time t so that118 qi = [q]i = h ᵀ i q. (8) During the network’s training, analogously to Thompson sampling [23], the vector h′i ∈ {0, 1}A is119 defined such that the i-th non-zero value corresponds to the index of the largest value among q′, a120 s CNN z(1) z(2) qθ(c0) θ(0) θ(1) θ(q) (a) Neural network DAG for modelling the action-value function q vector of realizations from the neural network’s posterior predictive outputQ ∼ N (q′;µQ|D,ΣQ|D).121 Because of the Gaussian assumptions in TAGI, this posterior predictive is readily available from the122 forward uncertainty propagation step, as outlined in §2.1.123 The red arrows in Figure 1b outline the flow of information during the inference procedure. The first124 step consists in inferring q using the relationships defined in either Equation 6 or 7. As this is a linear125 equation involving Gaussian random variables, the inference is analytically tractable. From there, one126 can follow the same layer-wise recursive procedure proposed by Goulet et al. [9] in order to learn127 the weights and biases in θ. With the exclusion of the standard hyperparameters related to network128 architecture, batch size, buffer size or the discount factor, this TAGI-DQN framework only involves a129 single hyperparameter, σV , the standard deviation for the value function. 
Note that when using CNNs130 with TAGI, Nguyen and Goulet [19] recommended using a decay function for the standard deviation131 of the observation noise so that at after seing e batches of n-steps,132 σeV = max(σ min V , η · σV )e−1. (9) The model in Equation 9 has three hyperparameters, the minimal noise parameter σminV , the decay133 factor η and the initial noise parameter σV . As it was shown by Nguyen and Goulet [19] for CNNs134 and how we show in §4 for RL problems, TAGI’s performance is robust towards the selection of these135 hyperparameters.136 A comparison of implementation between TAGI and backpropagation on deep Q-network with137 experience replay [17] is shown in Figure 2. A practical implementation of n-step TAGI deep138 Q-learning is presented in Algorithm 1 from the supplementary material.139 3 Related Works140 Over the last decades, several approximate methods have been proposed in order to allow for Bayesian141 neural networks [18, 12, 10, 5, 14, 20, 29, 8] with various degree of approximations. Although some142 these methods have shown to be capable of tackling classification tasks on datasets such ImageNet143 [20], few of them have been applied on large-scale RL benchmark problems. The key idea behind144 using Bayesian methods for reinforcement learning is to consider the uncertainty associated with145 Q-functions in order to identify a tradeoff between exploring the performance of possible actions and146 exploiting the current optimal policy [25]. This typically takes the form of performing Thompson147 sampling [23] rather than relying on heuristics such as -greedy.148 For instance, MC dropout [8] was introduced has a method intrinsically suited for reinforcement149 learning. Nevertheless, five years after its inception, the approach has not yet been reliably scaled150 to more advanced benchmarks such as the Atari game environment. The same applies to Bayes-151 by-backprop [5] which was recently applied to simple RL problems [13], and which has not yet152 been applied to more challenging environments requiring convolutional networks. On the other153 hand, Bayesian neural networks relying on sampling methods such as Hamiltonian Monte-Carlo154 [18] are typically computationally demanding to be scaled to RL problems involving such a complex155 environment.156 Although mainstream methods related to Bayesian neural networks have seldom been applied to157 complex RL problems, several research teams have worked on alternative approaches in order to158 allow performing Thompson sampling. For instance, Azizzadenesheli et al. [4] have employed a deep159 Q-network where the output layer relies on Bayesian linear regression. This approach was shown160 to be outperforming its deterministic counterparts on Atari games. Another approach by Osband et161 al. [21] employs bootstrapped deep Q-networks with multiple network heads in order to represent162 the uncertainty in the Q-functions. This approach was also shown to scale to Atari games while163 presenting an improved performance in comparison with deterministic deep Q-networks. Finally,164 Wang and Zhou [27] have tackled the same problem, but this time by modelling the variability in the165 Q-functions through a latent space learned using variational inference. 
Despite its good performance166 on the benchmarks tested, it did not allowed to be scaled to the Atari game environment.167 The TAGI deep Q-network presented in th is paper is the first demonstration that an analytically168 tractable inference approach for Bayesian neural networks can be scaled to a problem as challenging169 as the Atari game environment.170 4 Benchmarks171 This section compares the performance of TAGI with backpropagation-based standard implementa-172 tions on off- and on-policy deep RL. For the off-policy RL, both TAGI-based and backpropagation-173 based RL approaches are applied to deep Q-learning with experience replay (see Algorithm 1&2)174 for the lunar lander and cart pole environments. For the on-policy RL, TAGI is applied to the n-step175 Q-learning algorithm and is compared with its backpropagation-based counterpart [15]. We perform176 the comparison for five Atari games including Beamrider, Breakout, Pong, Qbert, and Space Invaders.177 Note that these five games are commonly selected for tuning hyperparameters for the entire Atari178 games [15, 16]. All benchmark environments are taken from the OpenAI Gym [6].179 4.1 Experimental Setup180 In the first experiments with off-policy RL, we use a fully-connected multilayer perceptron (MLP)181 with two hidden layers of 256 units for the lunar lander environment, and with one hidden layer of182 64 units for the cart pole environment. In these experiments, there is no need for input processing183 nor for reward normalization. Note that unlike for the deterministic Q-network, TAGI does not use a184 target Q-network for ensuring the stability during training and allows eliminating the hyperparameter185 related to the target update frequency. For the deep Q-network trained with backpropagation, we186 employ the pre-tuned implementation of OpenAI baselines [7] with all hyperparameters set to the187 default values.188 For the Atari experiments with on-policy RL, we use the same input processing and model architecture189 as Mnih et al. [15]. The Q-network uses two convolutional layers (16-32) and a full-connected MLP190 of 256 units. TAGI n-step Q-learning only uses a single network to represent the value function for191 each action, and relies on a single learning agent. The reason behind this choice is that TAGI current192 main library is only available on Matlab which does not support running a Python multiprocessing193 module such as the OpenAI gym. In the context of TAGI, we use an horizon of 128 steps and as194 recommended by Andrychowicz et al. [3] and following practical implementation details [1, 2],195 each return in n-step Q-learning algorithm is normalized by subtracting the average return from196 the current n-steps and then dividing by the empirical standard deviation from the set of n returns.197 The standard deviation for the value function, (σV ), is initialized at 2. σV is decayed each 128198 steps with a factor η = 0.9999. The minimal standard deviation for the value function σminV = 0.3.199 These hyperparameters values were not grid-searched but simply adapted to the scale of the problems200 and are kept constant for all experiments. The complete details of the network architecture and201 hyperparameters are provided in the supplementary material.202 4.2 Results203 For the first set of experiments using off-policy RL, Figure 3 presents the average reward over204 100 episodes for three runs for the lunar lander and cart pole environment. 
The TAGI-based deep205 Q-learning with experience replay shows a faster and more stable learning than the one relying on206 backpropagation, while not requiring a target network. 207 Table 1 shows that the average reward over the last 100 episodes obtained using TAGI are greater208 than the one obtained using backpropagation. 209 Figure 4 compares the average reward over 100 episodes for three runs obtained for TAGI, with210 the results from Mnih et al. [15] for the second set of experiments on Atari games. Note that all211 results presented were obtained for a single agent, and that the results for the backpropagation-trained212 networks are only reported at the end of each epoch. 213 Results show that TAGI outperforms the results from the original n-step Q-learning algorithm trained214 with backpropagation [15] on Breakout, Pong, and Qbert, while underperforming on Beam Rider215 and Space Invaders. The average training time of TAGI for an Atari game is approximately 13 hours216 on GPU calculations benchmarked on a 4-core-intel desktop of 32GB of RAM with a NVIDIA217 GTX 1080 Ti GPU. The training speed of TAGI for the experiment of the off-policy deep RL is218 approximately three times slower on CPU calculations than the backpropagation-trained counterpart.219 The reason behind this slower training time is because of its intrinsically different inference engine, so220 that TAGI’s implementation is not compatible with existing libraries such as TensorFlow or Pytorch.221 TAGI’s library development is still ongoing and it is not yet fully optimized for computational222 efficiency. Overall, these results for on- and off policy RL approaches confirm that TAGI can be223 applied to large scale problems such as deep Q-learning.224 5 Discussion225 Although the performance of TAGI does not systematically outperform its backpropagation-based226 counterpart, it requires fewer hyperparameters (see §3 in supplementary material). This advantage227 is one of the key aspects for improving the generalization and reducing the computational cost of228 the hyperparameter tuning process which are the key challenges in current state of deep RL [11].229 For instance, in this paper, the TAGI’s hyperparameters relating to the standard deviation of value230 function (σV ) are kept constant across all experiments. Moreover, since these hyperparameters231 were not subject to grid-search in order to optimize the performance, the results obtained here232 are representative of what a user should obtain by simply adapting the hyperparameters to fit the233 specificities and scale of the environment at hand.234 More advanced RL approaches such as advanced actor critic (A2C) [15] and proximal policy opti-235 mization (PPO) [22] employ two-networks architectures in which one network is used to approximate236 a value function and other is employed to encode the policy. The current TAGI-RL framework is237 not yet able to handle such architectures because training a policy network involves an optimization238 problem for the selection of the optimal action. 
Backpropagation-based approaches currently rely on gradient optimization to perform this task, while TAGI will require developing alternative approaches in order to maintain analytical tractability without relying on gradient-based optimization.

6 Conclusion

This paper presents how to adapt TAGI to deep Q-learning. Throughout the experiments, we demonstrated that TAGI can reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters. These results challenge the common belief that for large-scale problems such as the Atari environment, neural networks can only be trained by relying on gradient backpropagation. We have shown here that this paradigm is no longer the only alternative, as TAGI has a linear computational complexity and can be used to learn the parameters of complex networks in an analytically tractable manner, without relying on gradient-based optimization.

References

[1] PyTorch examples for the REINFORCE algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py, 2019.
[2] PyTorch examples for the actor-critic algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py, 2020.
[3] M. Andrychowicz, A. Raichuk, P. Stańczyk, M. Orsini, S. Girgin, R. Marinier, L. Hussenot, M. Geist, O. Pietquin, M. Michalski, S. Gelly, and O. Bachem. What matters for on-policy deep actor-critic methods? A large-scale study. In International Conference on Learning Representations, 2021.
[4] K. Azizzadenesheli, E. Brunskill, and A. Anandkumar. Efficient exploration through Bayesian deep Q-networks. In IEEE Information Theory and Applications Workshop, pages 1–9, 2018.
[5] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
[6] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[7] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov. OpenAI Baselines. https://github.com/openai/baselines, 2017.
[8] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML proceedings, pages 1050–1059, 2016.
[9] J.-A. Goulet, L. H. Nguyen, and S. Amiri. Tractable approximate Gaussian inference for Bayesian neural networks. arXiv preprint, 2020.
[10] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
[11] A. Irpan. Deep reinforcement learning doesn't work yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
[12] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28, 2015.
[13] Z. Lipton, X. Li, J. Gao, L. Li, F. Ahmed, and L. Deng. BBQ-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[14] C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In ICML proceedings, pages 1708–1716, 2016.
[15] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML proceedings, pages 1928–1937. PMLR, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, December 2013.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, and G. Ostrovski. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] R. M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[19] L. H. Nguyen and J.-A. Goulet. Analytically tractable inference in deep neural networks. arXiv preprint, 2021.
[20] K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R. E. Turner, R. Yokota, and M. E. Khan. Practical deep learning with Bayesian principles. In Advances in Neural Information Processing Systems proceedings, 2019.
[21] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In NeurIPS proceedings, pages 4033–4041, 2016.
[22] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[23] M. Strens. A Bayesian framework for reinforcement learning. In ICML proceedings, pages 943–950, 2000.
[24] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[25] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2nd edition, 2018.
[26] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
[27] Z. Wang and M. Zhou. Thompson sampling via local uncertainty. In ICML proceedings, volume 119, pages 10115–10125, 2020.
[28] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[29] A. Wu, S. Nowozin, E. Meeds, R. E. Turner, J. M. Hernández-Lobato, and A. L. Gaunt. Deterministic variational inference for robust Bayesian neural networks. In ICLR proceedings, 2019.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code will be made available upon the publication of the paper.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What are the strengths and weaknesses of the proposed TAGI-DQN architecture? 2. How does the reviewer assess the clarity and quality of the paper's content, particularly regarding the discussion of TAGI with DQNs? 3. What are some potential concerns or limitations regarding the elimination of hyperparameters by moving to TAGI-DQN? 4. How does the reviewer evaluate the significance and impact of the paper's contributions, especially concerning its experimental demonstration and comparison to relevant baselines? 5. Are there any unanswered questions or areas for further investigation raised by the paper, such as understanding the specific advantages of the TAGI framework applied to a Bayesian form of the DQN?
Summary Of The Paper Review
Summary Of The Paper
This paper adapts a recently proposed analytical closed-form Bayesian approach to optimizing Neural Networks to Q-Learning. The proposed TAGI-DQN architecture is optimized without gradient descent. RL policies based on TAGI-DQN are shown to perform well on a collection of OpenAI Gym environments, including a small number of Atari games.

Review
High-level evaluation
I am excited about the potential advances to RL in complex environments that may be ushered in by the possible contributions put forward in this paper. I however do not feel that the work, as currently presented, is wholly ready for publication, and perhaps another round of revisions (including the addition of more relevant baselines -- see the "Significance" section below) would do this paper a great service. While reading, I got the sense that the authors didn't want to distract from their intended contributions by dwelling on how TAGI works. This is always a tricky balance in writing papers (how much background is too much?) but I did feel that too significant conceptual and technical gaps were introduced in the paper by keeping the discussion too high-level and informal through Section 2.

Originality
The adaptation of the TAGI framework to RL domains is very intriguing. Analytical solutions to a Bayesian treatment of NNs could greatly improve challenges of sample efficiency within RL. This advantage is demonstrated in the experimental analysis across a variety of domains.

Quality and Clarity
The major weaknesses of this paper concern the clarity of the formulation and utilization of TAGI with DQNs. In particular, I found the discussion in Section 2.1 to be far too high-level for introducing a relatively complex new approach. A phrase that ran through my head while reading was "show, don't just tell". The paper is quite light on technical details, and it wasn't due to space constraints, as there were nearly 2 pages of unused space remaining in the submitted version of the paper. I feel that a more formal approach to how the two steps of TAGI interact with individual parameter distributions and their connections through hidden layers is necessary and would greatly improve the paper. This would make the discussion in Sections 2.2-2.3 much easier to follow and understand. This is especially true in the discussion starting on line 124 when inference through the TAGI-DQN is described. Particularly egregious in this lack of clarity is the unmotivated and sudden jump at the end of Section 2.1 to RL. It's not apparent how the discussion around optimizing parameters via TAGI with a single example or a mini-batch directly relates to learning sequentially with episodes of observations and so on. Additionally, without any formal framing, there's an extensive burden placed on the reader to know exactly what parts of the following sections correspond to the steps within TAGI. With the connection seemingly drawn to the expected value of selecting an action a_t when conditioned on some state s_t, it is also somewhat surprising that the family of Distributional RL (Bellemare et al., ICML 2017) approaches went uncited. I acknowledge that Dist. RL does not attempt to pass itself off as a Bayesian approach, but the core component of these approaches is the implicit distribution formed around the expected value of rewards conditioned on action selection. In line 100, was the word "discrete" intended in the place of "categorical"? It's not clear why n-step returns are preferred.
Typically this introduces significant variance in DQN-based approaches. Is there something particular about TAGI that necessitates the use of multi-step returns? The elimination of hyperparameters by moving to TAGI-DQN, described on line 130, seems to only consider those associated with SGD optimization (learning rate, optimization hyperparams, minibatch size, etc.). While this may require several components to tune, the architecture construction and size of the replay buffer can have an outsized impact as well. While it's nice to potentially not worry about hyperparameters associated with SGD, there are still significant choices made in the design of a model. What's the rationale for decaying the standard deviation σ_V? Is it to account for drifting biases as the parameters of the model are learned? Is this a "hack" that assumes improved model confidence over time? Is this perhaps too optimistic? What happens if an observed state in a later episode falls out of the assumed distribution? Is TAGI-DQN robust to outliers? These are important questions with regards to avoiding the Q-function approximation overfitting too early. Again, it's a shame that so little technical content is provided in Section 2. The primary contribution of this paper is the extension of TAGI to RL domains and architectures. Without the details of how TAGI actually works, line 16 in Algorithm 1 becomes an intellectual dead-end within this paper as written. In the OpenAI Gym experiments, the initialization of σ_V is not mentioned. Additionally, it is not mentioned whether the comparison with the OpenAI default DQN is especially fair or not. For instance, what is the architecture of the DQN model? Is it the same as the TAGI network? Does TAGI use the same amount of computation or more in its optimization steps? Are the batch sizes consistent between the two implementations? For the experiments on Atari, it's unclear whether n is set to 128 or whether that horizon is for the decay of the standard deviation. While it's impressive that TAGI outperforms DQN on several of these Atari games, the discussion or analysis about why it fails on some is missing. It would be nice to be able to develop some insight into what types of domains TAGI may not be appropriate for. Are there some complexities in the state spaces of Beam Rider or Space Invaders that make it harder for the solution approach? Do the representations learned by the CNN encounter (or otherwise get stuck in) local minima? I find it interesting that in both of these games, there is a point after which the baseline DQN solution diverges from the TAGI-DQN solution in performance, indicating that it has learned something that wasn't accessible to the TAGI network.

Significance
If properly evaluated, the contributions promised through a Bayesian method optimized via analytical means are potentially of great significance. However, I found the technical discussion and experimental demonstration to be mildly unconvincing (partially due to a lack of clarity as discussed above). Clearly, the performance of the TAGI-DQN is demonstrated to be better than a standard DQN model. However, the lack of comparison to comparable approximate Bayesian RL methods or appropriate ablations is disappointing. Without comparison to relevant baselines, it's hard to feel confident about the proposed analytical solution from a practical standpoint. It's possibly assumed that TAGI-DQN is automatically more efficient/effective due to the closed-form solution.
I say this as a means to rigorously understand the specific advantages borne by the TAGI framework applied to a Bayesian form of the DQN. In line 144, a "few" BNN approaches that have been applied to large RL domains are alluded to. It would be preferable if these papers were explicitly cited and discussed so as to provide greater context into how TAGI differs (aside from only being an analytical closed-form solution method, if there is anything else). Is the way the network parameter distributions are factorized to make inference tractable similar to any of these BNN approaches? What about variational-inference types of solution methods (such as VariBAD; Zintgraf et al., ICLR 2020)? How similar is that to TAGI? Answers to these types of questions would greatly improve the framing of the results that are shown in the following sections of the paper.

Additional references
Bellemare, Marc G., Will Dabney, and Rémi Munos. "A distributional perspective on reinforcement learning." International Conference on Machine Learning. PMLR, 2017.
Zintgraf, L., et al. "VariBAD: a very good method for Bayes-adaptive deep RL via meta-learning." Proceedings of ICLR 2020 (2020).
NIPS
Title Analytically Tractable Bayesian Deep Q-Learning

Abstract Reinforcement learning (RL) has gained increasing interest since the demonstration it was able to reach human performance on video game benchmarks using deep Q-learning (DQN). The current consensus for training neural networks on such complex environments is to rely on gradient-based optimization. Although alternative Bayesian deep learning methods exist, most of them still rely on gradient-based optimization, and they typically do not scale on benchmarks such as the Atari game environment. Moreover, none of these approaches allow performing the analytical inference for the weights and biases defining the neural network. In this paper, we present how we can adapt the temporal difference Q-learning framework to make it compatible with the tractable approximate Gaussian inference (TAGI), which allows learning the parameters of a neural network using a closed-form analytical method. Throughout the experiments with on- and off-policy reinforcement learning approaches, we demonstrate that TAGI can reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters, and without relying on gradient-based optimization.

1 Introduction

Reinforcement learning (RL) has gained increasing interest since the demonstration it was able to reach human performance on video game benchmarks using deep Q-learning (DQN) [17, 26]. Deep RL methods typically require an explicit definition of an exploration-exploitation function in order to compromise between using the current policy and exploring the potential of new actions. Such an issue can be mitigated by opting for a Bayesian approach where the selection of the optimal action to follow is based on Thompson sampling [23]. Bayesian deep learning methods based on variational inference [12, 10, 5, 14, 20, 29], Monte-Carlo dropout [8], or Hamiltonian Monte-Carlo sampling [18] have been shown to perform well on regression and classification benchmarks, despite being generally more computationally demanding than their deterministic counterparts. Note that none of these approaches allow performing the analytical inference for the weights and biases defining the neural network. Goulet et al.
[9] recently proposed the tractable approximate Gaussian inference (TAGI) method, which allows learning the parameters of a neural network using a closed-form analytical method. For convolutional architectures applied to classification benchmarks, this approach was shown to exceed the performance of other Bayesian and deterministic approaches based on gradient backpropagation, and to do so while requiring a smaller number of training epochs [19].

In this paper, we present how we can adapt the temporal difference Q-learning framework [24, 28] to make it compatible with TAGI. Section 2 first reviews the theory behind TAGI and the expected-value formulation through Bellman's equation. Then, we present how the action-value function can be learned using TAGI. Section 3 presents the related work associated with Bayesian reinforcement learning, and Section 4 compares the performance of a simple TAGI-DQN architecture with the one obtained for its backpropagation-trained counterpart.

2 TAGI-DQN Formulation

This section presents how to adapt the DQN frameworks in order to make them compatible with analytical inference. First, Section 2.1 reviews the fundamental theory behind TAGI, and Section 2.2 reviews the concept of long-term expected value through Bellman's equation [25]. Then, Section 2.3 presents how to make the Q-learning formulation [28] compatible with TAGI.

2.1 Tractable Approximate Gaussian Inference

TAGI [9] relies on two main steps: forward uncertainty propagation and backward update. The first, forward uncertainty-propagation, step is intended to build the joint prior between the neural network parameters and the hidden states. This operation is performed by propagating the uncertainty from the model parameters and the input layer through the neural network. TAGI relies on the Gaussian assumption for the prior of the parameters as well as for the variables in the input layer. In order to maintain the analytical tractability of the forward step, we rely on the Gaussian multiplicative approximation (GMA), which consists in approximating the product of two Gaussians by a Gaussian random variable whose moments match those calculated exactly using moment-generating functions. In order to propagate uncertainty through non-linear activation functions, a second approximation is made by locally linearizing these functions at the expected value of the hidden unit being activated. Although this linearization procedure may seem to be a crude approximation, it has been shown to match or exceed state-of-the-art performance with fully connected neural networks (FNNs) [9], as well as convolutional neural networks (CNNs) and generative adversarial networks [19]. TAGI succeeds in maintaining a linear computational complexity for the forward step (1) by assuming a diagonal covariance for all parameters in the network and for all hidden units within the same layer, and (2) by adopting a layer-wise approach where the joint prior is only computed and stored for the hidden units of pairs of successive hidden layers, as well as for the hidden units within a layer and the parameters connecting into it. This layer-wise approach is allowed by the inherent conditional independence built into feed-forward neural network architectures.
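As a rough illustration of the forward uncertainty-propagation step, the sketch below propagates means and variances through a single linear layer under TAGI's diagonal-Gaussian and independence assumptions. It is our own simplified rendering (the paper's library is in Matlab), and it omits the activation linearization and the backward update.

```python
import numpy as np

def linear_layer_moments(mu_x, var_x, mu_w, var_w, mu_b, var_b):
    """Propagate the mean and (diagonal) variance of Gaussian inputs through
    a linear layer z = W x + b with independent Gaussian weights and biases.

    For independent Gaussians, the product w * x has
        E[w x]   = mu_w * mu_x
        Var(w x) = var_w * var_x + var_w * mu_x**2 + mu_w**2 * var_x,
    and independent terms are summed over the input dimension.
    Shapes: mu_x, var_x -> (f_in,); mu_w, var_w -> (f_out, f_in);
            mu_b, var_b -> (f_out,).
    """
    mu_z = mu_w @ mu_x + mu_b
    var_z = (var_w * var_x + var_w * mu_x**2 + mu_w**2 * var_x).sum(axis=1) + var_b
    return mu_z, var_z
```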
The second, backward update, step consists in performing layer-wise recursive Bayesian inference, which goes from hidden layer to hidden layer and from each hidden layer to the parameters connecting into it. Given the Gaussian approximation for the joint prior throughout the network, the inference can be done analytically while still maintaining a linear computational complexity with respect to the number of weight parameters in the network. TAGI allows inferring the diagonal posterior for weight and bias parameters, either using one observation at a time or using mini-batches of data. As we will show in the next sections, this online learning capacity is well suited for RL problems, where we experience episodes sequentially and where we need to define a tradeoff between exploration and exploitation as a function of our knowledge of the expected value associated with being in a state and taking an action.

2.2 Expected Value and Bellman's Equation

We define r(s, a, s') as the reward for being in a state s ∈ ℝ^S, taking an action a ∈ A = {a_1, a_2, ..., a_A}, and ending in a state s' ∈ ℝ^S. For simplicity, we use the short-form notation r(s, a, s') ≡ r(s) in order to define the value as the infinite sum of discounted rewards,
$$v(s_t) = \sum_{k=0}^{\infty} \gamma^k r(s_{t+k}). \quad (1)$$
As we do not know what the future states $s_{t+k}$ for $k > 0$ will be, we need to consider them as random variables $S_{t+k}$, so that the value $V(s_t)$ becomes a random variable as well,
$$V(s_t) = r(s_t) + \sum_{k=1}^{\infty} \gamma^k r(S_{t+k}). \quad (2)$$
Rational decisions regarding which action to take among the set A are based on the maximization of the expected value as defined by the action-value function
$$q(s_t, a_t) = \mu_V \equiv \mathbb{E}[V(s_t, a_t, \pi)] = r(s_t) + \mathbb{E}\left[\sum_{k=1}^{\infty} \gamma^k r(S_{t+k})\right], \quad (3)$$
where it is assumed that at each time $t$, the agent takes the action defined by the policy $\pi$. In the case of episode-based learning, where the agent interacts with the environment, we assume we know the tuple of states $s_t$ and $s_{t+1}$, so that we can redefine the value as
$$V(s_t, a_t) = r(s_t) + \gamma\left(r(s_{t+1}) + \sum_{k=1}^{\infty} \gamma^k r(S_{t+1+k})\right) = r(s_t) + \gamma V(s_{t+1}, a_{t+1}). \quad (4)$$
Assuming that the values $V \sim \mathcal{N}(v; \mu_V, \sigma_V^2)$ in Equations 2 and 4 are described by Gaussian random variables, we can reparameterize these equations as the sum of the expected value $q(s, a)$ and a zero-mean Gaussian random variable $E \sim \mathcal{N}(\epsilon; 0, 1)$, so that
$$V(s, a) = q(s, a) + \sigma_V E, \quad (5)$$
where the variance $\sigma_V^2$ and $E$ are assumed here to be independent of $s$ and $a$. Although this assumption could be relaxed in a more general framework, such a heteroscedastic variance term is outside the scope of this paper. Using this reparameterization, we can write Equation 4 as the discounted difference between the expected values of two subsequent states,
$$q(s_t, a_t) = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) - \sigma_{V_t} E_t + \gamma \sigma_{V_{t+1}} E_{t+1} = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) + \sigma_V E. \quad (6)$$
Note that in Equation 6, $\sigma_{V_t}$ and $\gamma \sigma_{V_{t+1}}$ can be combined into a single standard deviation parameter $\sigma_V$ under the assumption that $E_i \perp E_j, \forall i \neq j$.

In the case where, at a time $t$, we want to update the Q-values encoded in the neural network only after observing n-step returns [15], we can reformulate the observation equation so that
$$q(s_t, a_t) = \sum_{i=0}^{n-t-1} \gamma^i r(s_{t+i}) + \gamma^{n-t} q(s_n, a_n) + \sigma_V E_t, \quad \forall t \in \{1, 2, \cdots, n-1\}. \quad (7)$$
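To make Equation 7 concrete, here is a small sketch of how the n-step observation targets could be assembled from one rollout. This is our illustrative reading of the equation, not the authors' implementation, and the variable names are ours; the σ_V·E term is the observation noise that TAGI accounts for during inference, not something sampled here.

```python
import numpy as np

def n_step_targets(rewards, q_bootstrap, gamma):
    """Build the observed targets of Eq. (7) for every step of one horizon.

    rewards:     the rewards r(s_t) for the steps t = 1, ..., n-1 of the
                 current horizon (stored 0-indexed in the array).
    q_bootstrap: q(s_n, a_n), the expected value at the bootstrap state.
    Each target is the discounted sum of the remaining rewards plus the
    discounted bootstrap value, computed with a backward recursion.
    """
    targets = np.empty(len(rewards))
    running = q_bootstrap
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets
```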
Note that in the application of Equation 7, we employ the simplifying assumption that $E_t \perp E_{t+i}, \forall i \neq 0$, as Equation 6 already makes simplifying assumptions for the independence of $\sigma_V^2$ and $E$. In a more general framework, this assumption could be relaxed. An example of n-step returns is presented in the algorithm displayed in §1 of the supplementary material. The following subsection presents, for the case of categorical actions, how to model the deterministic action-value function q(s, a) using a neural network.

2.3 TAGI Deep Q-learning for Categorical Actions

Suppose we represent the environment's states at times $t$ and $t+1$ by $\{s, s'\}$, and the expected value for each of the A possible actions $a \in \mathcal{A}$ by the vector $\mathbf{q} \in \mathbb{R}^A$. In that context, the role of the neural network is to model the relationship between $\{s, a\}$ and $\mathbf{q}$. Figure 1a presents a directed acyclic graph (DAG) describing the interconnectivity in such a neural network, where red nodes denote state variables, green nodes are vectors of hidden units $\mathbf{z}$, the blue box is a compact representation of the structure of a convolutional neural network, and gray arrows represent the weights and biases $\theta$ connecting the different hidden layers. Note that unlike the other gray arrows, the red ones in (b) are not directed arcs representing dependencies; they simply outline the flow of information that takes place during the inference step. For simplicity, the convolutional operations are omitted and regrouped under the CNN box [19]. In order to learn the parameters $\theta$ of such a network, we need to expand the graph from Figure 1a to include the reward $r$, the error term $\sigma_V$, and $\mathbf{q}'$, the q-values of time step $t+1$. This configuration is presented in Figure 1b, where the doubled nodes represent the states $s$ and $s'$, which are both evaluated in a network sharing the same parameters. When applying Equation 6, the q-value corresponding to a specific action can be selected using a vector $\mathbf{h}_i \in \{0, 1\}^A$ having a single non-zero value at the $i$-th component, identifying which action was taken at time $t$, so that
$$q_i = [\mathbf{q}]_i = \mathbf{h}_i^{\top} \mathbf{q}. \quad (8)$$
During the network's training, analogously to Thompson sampling [23], the vector $\mathbf{h}'_i \in \{0, 1\}^A$ is defined such that the $i$-th non-zero value corresponds to the index of the largest value among $\mathbf{q}'$, a vector of realizations from the neural network's posterior predictive output $Q \sim \mathcal{N}(\mathbf{q}'; \boldsymbol{\mu}_{Q|\mathcal{D}}, \boldsymbol{\Sigma}_{Q|\mathcal{D}})$. Because of the Gaussian assumptions in TAGI, this posterior predictive is readily available from the forward uncertainty-propagation step, as outlined in §2.1.

[Figure 1: (a) Neural network DAG for modelling the action-value function q.]
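A minimal sketch of this Thompson-sampling-style action selection: draw one realization of q' from the Gaussian posterior predictive and act greedily on it. The diagonal-covariance simplification and the names are our assumptions for illustration.

```python
import numpy as np

def thompson_action(mu_q, var_q, rng):
    """Select an action by sampling one realization q' ~ N(mu_q, diag(var_q))
    from the posterior predictive over the A action values, then taking the
    argmax; this replaces an epsilon-greedy exploration rule."""
    q_sample = rng.normal(mu_q, np.sqrt(var_q))
    i = int(np.argmax(q_sample))
    h = np.zeros_like(mu_q)
    h[i] = 1.0  # one-hot selection vector, as in Eq. (8)
    return i, h

rng = np.random.default_rng(1)
action, h = thompson_action(np.array([0.2, 0.5, 0.1]), np.array([0.3, 0.3, 0.3]), rng)
```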
Note that when using CNNs130 with TAGI, Nguyen and Goulet [19] recommended using a decay function for the standard deviation131 of the observation noise so that at after seing e batches of n-steps,132 σeV = max(σ min V , η · σV )e−1. (9) The model in Equation 9 has three hyperparameters, the minimal noise parameter σminV , the decay133 factor η and the initial noise parameter σV . As it was shown by Nguyen and Goulet [19] for CNNs134 and how we show in §4 for RL problems, TAGI’s performance is robust towards the selection of these135 hyperparameters.136 A comparison of implementation between TAGI and backpropagation on deep Q-network with137 experience replay [17] is shown in Figure 2. A practical implementation of n-step TAGI deep138 Q-learning is presented in Algorithm 1 from the supplementary material.139 3 Related Works140 Over the last decades, several approximate methods have been proposed in order to allow for Bayesian141 neural networks [18, 12, 10, 5, 14, 20, 29, 8] with various degree of approximations. Although some142 these methods have shown to be capable of tackling classification tasks on datasets such ImageNet143 [20], few of them have been applied on large-scale RL benchmark problems. The key idea behind144 using Bayesian methods for reinforcement learning is to consider the uncertainty associated with145 Q-functions in order to identify a tradeoff between exploring the performance of possible actions and146 exploiting the current optimal policy [25]. This typically takes the form of performing Thompson147 sampling [23] rather than relying on heuristics such as -greedy.148 For instance, MC dropout [8] was introduced has a method intrinsically suited for reinforcement149 learning. Nevertheless, five years after its inception, the approach has not yet been reliably scaled150 to more advanced benchmarks such as the Atari game environment. The same applies to Bayes-151 by-backprop [5] which was recently applied to simple RL problems [13], and which has not yet152 been applied to more challenging environments requiring convolutional networks. On the other153 hand, Bayesian neural networks relying on sampling methods such as Hamiltonian Monte-Carlo154 [18] are typically computationally demanding to be scaled to RL problems involving such a complex155 environment.156 Although mainstream methods related to Bayesian neural networks have seldom been applied to157 complex RL problems, several research teams have worked on alternative approaches in order to158 allow performing Thompson sampling. For instance, Azizzadenesheli et al. [4] have employed a deep159 Q-network where the output layer relies on Bayesian linear regression. This approach was shown160 to be outperforming its deterministic counterparts on Atari games. Another approach by Osband et161 al. [21] employs bootstrapped deep Q-networks with multiple network heads in order to represent162 the uncertainty in the Q-functions. This approach was also shown to scale to Atari games while163 presenting an improved performance in comparison with deterministic deep Q-networks. Finally,164 Wang and Zhou [27] have tackled the same problem, but this time by modelling the variability in the165 Q-functions through a latent space learned using variational inference. 
Despite its good performance on the benchmarks tested, that approach could not be scaled to the Atari game environment. The TAGI deep Q-network presented in this paper is the first demonstration that an analytically tractable inference approach for Bayesian neural networks can be scaled to a problem as challenging as the Atari game environment.

4 Benchmarks

This section compares the performance of TAGI with standard backpropagation-based implementations on off- and on-policy deep RL. For the off-policy RL, both the TAGI-based and backpropagation-based approaches are applied to deep Q-learning with experience replay (see Algorithms 1 and 2) on the lunar lander and cart pole environments. For the on-policy RL, TAGI is applied to the n-step Q-learning algorithm and is compared with its backpropagation-based counterpart [15]. We perform the comparison on five Atari games: Beamrider, Breakout, Pong, Qbert, and Space Invaders. Note that these five games are commonly selected for tuning hyperparameters for the entire Atari suite [15, 16]. All benchmark environments are taken from the OpenAI Gym [6].

4.1 Experimental Setup

In the first experiments with off-policy RL, we use a fully connected multilayer perceptron (MLP) with two hidden layers of 256 units for the lunar lander environment, and with one hidden layer of 64 units for the cart pole environment. In these experiments, there is no need for input processing or for reward normalization. Note that unlike the deterministic Q-network, TAGI does not use a target Q-network to ensure stability during training, which eliminates the hyperparameter controlling the target-update frequency. For the deep Q-network trained with backpropagation, we employ the pre-tuned implementation of OpenAI Baselines [7] with all hyperparameters set to their default values.

For the Atari experiments with on-policy RL, we use the same input processing and model architecture as Mnih et al. [15]. The Q-network uses two convolutional layers (16-32) and a fully connected MLP of 256 units. TAGI n-step Q-learning uses only a single network to represent the value function for each action, and relies on a single learning agent. The reason for this choice is that TAGI's current main library is only available in Matlab, which does not support running a Python multiprocessing module such as the OpenAI Gym. For TAGI, we use a horizon of 128 steps and, as recommended by Andrychowicz et al. [3] and following practical implementation details [1, 2], each return in the n-step Q-learning algorithm is normalized by subtracting the average return over the current n steps and then dividing by the empirical standard deviation of the set of n returns. The standard deviation for the value function, σ_V, is initialized at 2; σ_V is decayed every 128 steps with a factor η = 0.9999, and the minimal standard deviation for the value function is σ_V^min = 0.3. These hyperparameter values were not grid-searched but simply adapted to the scale of the problems, and they are kept constant across all experiments. The complete details of the network architecture and hyperparameters are provided in the supplementary material.
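A small sketch of this decay schedule (cf. Eq. (9) above); the exact update rule in the authors' Matlab code may differ, so treat this as our reading of the hyperparameters just listed.

```python
def decayed_sigma_v(sigma_v0=2.0, eta=0.9999, sigma_v_min=0.3, e=1):
    """Standard deviation of the value function after e horizons of 128 steps:
    multiplied by eta once per horizon and floored at sigma_v_min."""
    return max(sigma_v_min, sigma_v0 * eta ** (e - 1))
```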
4.2 Results

For the first set of experiments using off-policy RL, Figure 3 presents the average reward over 100 episodes, over three runs, for the lunar lander and cart pole environments. TAGI-based deep Q-learning with experience replay shows faster and more stable learning than the version relying on backpropagation, while not requiring a target network. Table 1 shows that the average rewards over the last 100 episodes obtained using TAGI are greater than those obtained using backpropagation.

Figure 4 compares the average reward over 100 episodes, over three runs, obtained with TAGI against the results from Mnih et al. [15] for the second set of experiments on Atari games. Note that all results presented were obtained with a single agent, and that the results for the backpropagation-trained networks are only reported at the end of each epoch. The results show that TAGI outperforms the original n-step Q-learning algorithm trained with backpropagation [15] on Breakout, Pong, and Qbert, while underperforming on Beam Rider and Space Invaders. The average training time of TAGI for an Atari game is approximately 13 hours on GPU, benchmarked on a 4-core Intel desktop with 32 GB of RAM and an NVIDIA GTX 1080 Ti GPU. For the off-policy deep-RL experiments, TAGI trains approximately three times slower on CPU than its backpropagation-trained counterpart. This slower training time stems from TAGI's intrinsically different inference engine, which makes its implementation incompatible with existing libraries such as TensorFlow or PyTorch; TAGI's library development is still ongoing, and it is not yet fully optimized for computational efficiency. Overall, these results for on- and off-policy RL approaches confirm that TAGI can be applied to large-scale problems such as deep Q-learning.

5 Discussion

Although TAGI does not systematically outperform its backpropagation-based counterpart, it requires fewer hyperparameters (see §3 in the supplementary material). This advantage is key to improving generalization and reducing the computational cost of hyperparameter tuning, which are central challenges in the current state of deep RL [11]. For instance, in this paper, TAGI's hyperparameters relating to the standard deviation of the value function (σ_V) are kept constant across all experiments. Moreover, since these hyperparameters were not subject to grid search in order to optimize performance, the results obtained here are representative of what a user should obtain by simply adapting the hyperparameters to fit the specificities and scale of the environment at hand.

More advanced RL approaches such as advantage actor-critic (A2C) [15] and proximal policy optimization (PPO) [22] employ two-network architectures in which one network is used to approximate a value function and the other is employed to encode the policy. The current TAGI-RL framework is not yet able to handle such architectures, because training a policy network involves an optimization problem for the selection of the optimal action.
Backpropagation-based approaches currently rely on gradient optimization to perform this task, while TAGI will require developing alternative approaches in order to maintain analytical tractability without relying on gradient-based optimization.

6 Conclusion

This paper presents how to adapt TAGI to deep Q-learning. Throughout the experiments, we demonstrated that TAGI can reach a performance comparable to backpropagation-trained networks while using fewer hyperparameters. These results challenge the common belief that for large-scale problems such as the Atari environment, neural networks can only be trained by relying on gradient backpropagation. We have shown here that this paradigm is no longer the only alternative, as TAGI has a linear computational complexity and can be used to learn the parameters of complex networks in an analytically tractable manner, without relying on gradient-based optimization.

References

[1] PyTorch examples for the REINFORCE algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py, 2019.
[2] PyTorch examples for the actor-critic algorithm. https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py, 2020.
[3] M. Andrychowicz, A. Raichuk, P. Stańczyk, M. Orsini, S. Girgin, R. Marinier, L. Hussenot, M. Geist, O. Pietquin, M. Michalski, S. Gelly, and O. Bachem. What matters for on-policy deep actor-critic methods? A large-scale study. In International Conference on Learning Representations, 2021.
[4] K. Azizzadenesheli, E. Brunskill, and A. Anandkumar. Efficient exploration through Bayesian deep Q-networks. In IEEE Information Theory and Applications Workshop, pages 1–9, 2018.
[5] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
[6] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[7] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov. OpenAI Baselines. https://github.com/openai/baselines, 2017.
[8] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML proceedings, pages 1050–1059, 2016.
[9] J.-A. Goulet, L. H. Nguyen, and S. Amiri. Tractable approximate Gaussian inference for Bayesian neural networks. arXiv preprint, 2020.
[10] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
[11] A. Irpan. Deep reinforcement learning doesn't work yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
[12] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28, 2015.
[13] Z. Lipton, X. Li, J. Gao, L. Li, F. Ahmed, and L. Deng. BBQ-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[14] C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In ICML proceedings, pages 1708–1716, 2016.
[15] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML proceedings, pages 1928–1937. PMLR, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, December 2013.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, and G. Ostrovski. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] R. M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[19] L. H. Nguyen and J.-A. Goulet. Analytically tractable inference in deep neural networks. arXiv preprint, 2021.
[20] K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R. E. Turner, R. Yokota, and M. E. Khan. Practical deep learning with Bayesian principles. In Advances in Neural Information Processing Systems proceedings, 2019.
[21] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In NeurIPS proceedings, pages 4033–4041, 2016.
[22] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[23] M. Strens. A Bayesian framework for reinforcement learning. In ICML proceedings, pages 943–950, 2000.
[24] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[25] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2nd edition, 2018.
[26] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
[27] Z. Wang and M. Zhou. Thompson sampling via local uncertainty. In ICML proceedings, volume 119, pages 10115–10125, 2020.
[28] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[29] A. Wu, S. Nowozin, E. Meeds, R. E. Turner, J. M. Hernández-Lobato, and A. L. Gaunt. Deterministic variational inference for robust Bayesian neural networks. In ICLR proceedings, 2019.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code will be made available upon the publication of the paper.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper regarding deep q-learning? 2. What are the strengths of the proposed approach, particularly in its exploration-exploitation tradeoff handling? 3. What are the weaknesses of the paper, both theoretically and empirically? 4. How does the reviewer assess the choice of not using a target network for TAGI-DQN? 5. What ablation studies would be beneficial for understanding the effects of design choices in TAGI? 6. How does the reviewer view the paper's comparison to baselines, specifically in the Atari games context? 7. Are there any suggestions for improving the paper's results or experimental design?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes applying TAGI (Goulet et al., 2019), a framework for approximate Bayesian inference in deep neural networks, to deep Q-learning. Using assumptions of Gaussianity on various distributions, TAGI infers both the neural network parameters and unit activations, without needing gradient descent. Because all distributions are assumed to be Gaussian, the means and variances can be updated analytically with linear computational complexity. TAGI is essentially used to infer the correct Q-value function, and the approach is applied to lunar lander, cartpole, and several Atari games.

Review
Overall, I liked the idea of introducing Bayesian approaches to deep RL, particularly as it offers a more principled way of handling the exploration-exploitation tradeoff compared to epsilon-greedy exploration. However, I do have several reservations on both the theoretical and empirical aspects of this work, and for the reasons given below, I cannot recommend this paper for acceptance at this stage.

Strengths:
- Introducing Bayesian inference into deep RL allows for the possibility of a more principled approach to the exploration-exploitation tradeoff via Thompson sampling; exploration is driven by greater posterior uncertainty, while lower uncertainty will drive greater exploitation.
- Compared to alternative approaches to Bayesian deep learning, TAGI's inference can be done analytically with linear computational complexity with respect to the number of network parameters.

Theoretical weaknesses:
- The authors have chosen not to use a target network for TAGI-DQN. This is rather controversial as it changes the training target. DQN updates the Q-network towards a fixed TD target. For the learning target to be fixed, it needs a fixed bootstrap, which therefore requires a fixed target network. By not using a fixed target network, TAGI-DQN attempts to infer the network parameters that minimize a different learning objective, what Sutton & Barto (2nd ed., Ch. 11.5) call the mean squared TD error, and this is known to converge to an incorrect result.

Empirical weaknesses:
- The Atari results were run for 40M frames (only a fifth of the 200M that is more standard). My concern is that this is too early to tell what the asymptotic performance of these algorithms will be like. For instance, it'd be nice to know whether any of TAGI's early leads over the backprop algorithm will be stable over longer horizons. It'd also be nice to know whether TAGI training eventually converges to some asymptotic performance or whether instabilities will emerge over longer training horizons.
- I think this work could also benefit from some ablation studies. For instance, the authors chose not to include a target network for TAGI: as this is a striking design decision, it'd be helpful to see what its effects are via an ablation study. Presumably, TAGI explored by Thompson sampling (it was not made explicit in the paper), and it'd be interesting to see how this compares to a more traditional epsilon-greedy policy. Finally, it'd be helpful to also examine TAGI in a policy evaluation setting to see whether it can converge onto a correct, known policy.

---Review update---
I thank the authors for their response to my questions during the discussion period. In light of their comments, my previous theoretical concerns have been addressed, and I have raised my score to 5 to reflect that.
But I still feel that the paper's baselines were rather thin, and multiple suggestions for improvement have been offered by myself and the other reviewers. I'd like to mention the Atari baselines in particular. The authors' current choice is unusual and has never been used beyond the original paper. The authors compare against the n-step Q-learning of Mnih et al. (2016), but the 1-thread version of it. This baseline is never used simply because it performs poorly -- it has no replay buffer or parallel actors to make up for it. I appreciate that the authors' intention was to demonstrate an improvement in the online learning setting. However, I think it'd still be helpful to compare against a more widely used Atari baseline as well, such as DQN and/or the multi-actor version of n-step Q-learning. I also believe it'd be helpful if the authors could provide results on more games, as the current sample size is small -- the authors show TAGI is better in only 3 out of the 5 games they ran. Based on this, it is difficult to conclude whether an RL practitioner should expect TAGI to do better than gradient descent if she were considering training an agent on a new task.
NIPS
Title VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization

Abstract Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques are proposed to alleviate the "neighbor explosion" problem by considering only a small subset of messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In company with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.

1 Introduction

The rise of Graph Neural Networks (GNNs) has brought the modeling of complex graph data into a new era. Using message passing, GNNs iteratively share information between neighbors in a graph to make predictions of node labels, edge labels, or graph-level properties. A number of powerful GNN architectures [1–4] have been widely applied to solve downstream tasks such as recommendation, social analysis, visual recognition, etc. With the soaring size of realistic graph datasets and the industrial need to model them efficiently, GNNs are hindered by a scalability problem. An L-layer GNN aggregates information from all L-hop neighbors, and standard training routines require these neighbors to all lie on the GPU at once. This prohibits full-batch training when facing a graph with millions of nodes [5].

A number of sampling-based methods have been proposed to accommodate large graphs with limited GPU resources. These techniques can be broadly classified into three categories: (1) Neighbor-sampling methods [2, 6] sample a fixed number of neighbors for each node; (2) Layer-sampling methods [7, 8] sample nodes in each layer independently with a constant sample size; (3) Subgraph-sampling methods [9, 10] sample a subgraph for each mini-batch and perform forward and back-propagation on the same subgraph across all layers.
Although these sampling-based methods may significantly speed up the training of GNNs, they suffer from three major drawbacks: (1) at the inference phase, sampling methods require all the neighbors to draw non-stochastic predictions, resulting in expensive predictions if the full graph cannot fit on the inference device; (2) as reported in [5] and in Section 6, state-of-the-art sampling baselines fail to achieve satisfactory results consistently across various tasks and datasets; (3) sampling-based methods cannot be universally applied to GNNs that utilize many-hop or global context in each layer, which hinders the application of more powerful GNNs to large graphs.

This paper presents VQ-GNN, a GNN framework using vector quantization to scale most state-of-the-art GNNs to large graphs through a principled and fundamentally different approach compared with the sampling-based methods. We explore the idea of using vector quantization (VQ) as a means of dimensionality reduction to learn and update a small number of quantized reference vectors (codewords) of global node representations. In VQ-GNN, mini-batch message passing in each GNN layer is approximated by a VQ codebook update and an approximated form of message passing between the mini-batch of nodes and the codewords; see Fig. 1. Our approach avoids the "neighbor explosion" problem and enables mini-batch training and inference of GNNs. In contrast to sampling-based techniques, VQ-GNN can effectively preserve all the messages passed to a mini-batch of nodes. We theoretically and experimentally show that our approach is efficient in terms of memory usage, training/inference time, and convergence speed. Experiments on various GNN backbones demonstrate the competitive performance of our framework compared with the full-graph training baseline and sampling-based scalable algorithms.

Paper organization. The remainder of this paper is organized as follows. Section 2 summarizes GNNs that can be re-formulated into a common framework of graph convolution. Section 3 defines the scalability challenge of GNNs and shows that dimensionality reduction is a potential solution. In Section 4, we describe our approach, VQ-GNN, from theoretical framework to algorithm design, and explain why it solves the scalability issue of most GNNs. Section 5 compares our approach to the sampling-based methods. Section 6 presents a series of experiments that validate the efficiency, robustness, and universality of VQ-GNN. Finally, Section 7 concludes this paper with a summary of limitations and broader impacts.

2 Preliminaries: GNNs defined as Graph Convolution

Notations. Consider a graph with $n$ nodes and $m$ edges (average degree $d = m/n$). Connectivity is given by the adjacency matrix $A \in \{0,1\}^{n \times n}$ and features are defined on nodes by $X \in \mathbb{R}^{n \times f_0}$, with $f_0$ the length of the feature vectors. Given a matrix $C$, let $C_{i,j}$, $C_{i,:}$, and $C_{:,j}$ denote its $(i,j)$-th entry, $i$-th row, and $j$-th column, respectively. For a finite sequence $\langle i_b \rangle : i_1, \ldots, i_b$, we use $C_{\langle i_b \rangle,:}$ to denote the matrix whose rows are the $i_b$-th rows of matrix $C$. We use $\odot$ to denote the element-wise (Hadamard) product. $\|\cdot\|_p$ denotes the entry-wise $\ell_p$ norm of a vector and $\|\cdot\|_F$ denotes the Frobenius norm. We use $I_n \in \mathbb{R}^{n \times n}$ to denote the identity matrix, $\mathbf{1}_n \in \mathbb{R}^n$ to denote the vector whose entries are all ones, and $e^i_n$ to denote the unit vector in $\mathbb{R}^n$ whose $i$-th entry is 1. The 0-1 indicator function is $\mathbb{1}\{\cdot\}$. We use $\mathrm{diag}(c)$ to denote a diagonal matrix whose diagonal entries come from vector $c$.
Concatenation along the last axis is denoted by $\|$. We use superscripts to refer to different copies of the same kind of variable; for example, $X^{(l)} \in \mathbb{R}^{n \times f_l}$ denotes the node representations at layer $l$. A Graph Neural Network (GNN) layer takes the node representations of the previous layer $X^{(l)}$ as input and produces a new representation $X^{(l+1)}$, where $X = X^{(0)}$ is the input feature matrix.

A common framework for generalized graph convolution. Although many GNNs are designed following different guiding principles, including neighborhood aggregation (GraphSAGE [2], PNA [11]), spatial convolution (GCN [1]), spectral filtering (ChebNet [12], CayleyNet [13], ARMA [14]), self-attention (GAT [3], Graph Transformers [15-17]), diffusion (GDC [18], DCNN [19]), Weisfeiler-Lehman (WL) alignment (GIN [4], 3WL-GNNs [20, 21]), and other graph algorithms [22, 23], nearly all GNNs can be interpreted as performing message passing on node features, followed by feature transformation and an activation function. As pointed out by Balcilar et al. [24], GNNs can typically be written in the form

$$X^{(l+1)} = \sigma\Big(\sum_s C^{(s)} X^{(l)} W^{(l,s)}\Big), \tag{1}$$

where $C^{(s)} \in \mathbb{R}^{n \times n}$ denotes the $s$-th convolution matrix that defines the message passing operator, $s \in \mathbb{Z}^+$ denotes the index of the convolution, and $\sigma(\cdot)$ denotes the non-linearity. $W^{(l,s)} \in \mathbb{R}^{f_l \times f_{l+1}}$ is the learnable linear weight matrix for the $l$-th layer and $s$-th filter. Within this common framework, GNNs differ from each other by the choice of convolution matrices $C^{(s)}$, which can be either fixed or learnable. A learnable convolution matrix relies on the inputs and learnable parameters and can be different in each layer (thus denoted as $C^{(l,s)}$):

$$C^{(l,s)}_{i,j} = \underbrace{C^{(s)}_{i,j}}_{\text{fixed}} \cdot \underbrace{h^{(s)}_{\theta^{(l,s)}}\big(X^{(l)}_{i,:}, X^{(l)}_{j,:}\big)}_{\text{learnable}}, \tag{2}$$

where $C^{(s)}$ denotes the fixed mask of the $s$-th learnable convolution, which may depend on the adjacency matrix $A$ and input edge features $E_{i,j}$, while $h^{(s)}(\cdot,\cdot): \mathbb{R}^{f_l} \times \mathbb{R}^{f_l} \to \mathbb{R}$ can be any learnable model parametrized by $\theta^{(l,s)}$. Sometimes a learnable convolution matrix is further row-wise normalized as $C^{(l,s)}_{i,j} \leftarrow C^{(l,s)}_{i,j} / \sum_j C^{(l,s)}_{i,j}$, for example in GAT [3]. We stick to Eq. (2) in the main paper and discuss row-wise normalization in Appendices A and E. The receptive field of one layer of graph convolution (Eq. (1)) is defined as the set of nodes $\mathcal{R}^1_i$ whose features $\{X^{(l)}_{j,:} \mid j \in \mathcal{R}^1_i\}$ determine $X^{(l+1)}_{i,:}$. We re-formulate some popular GNNs into this generalized graph convolution framework; see Table 1 and Appendix A for more.

The back-propagation rule of GNNs defined by Eq. (1) is

$$\nabla_{X^{(l)}} \ell = \sum_s \big(C^{(l,s)}\big)^{\mathsf T} \Big(\nabla_{X^{(l+1)}} \ell \odot \sigma'\big(\sigma^{-1}(X^{(l+1)})\big)\Big) \big(W^{(l,s)}\big)^{\mathsf T}, \tag{3}$$

which can also be understood as a form of message passing. Here $\sigma'$ and $\sigma^{-1}$ are the derivative and inverse of $\sigma$, respectively, and $\nabla_{X^{(l+1)}} \ell \odot \sigma'(\sigma^{-1}(X^{(l+1)}))$ is the gradient back-propagated through the non-linearity.
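To make Eqs. (1) and (3) concrete, the following NumPy sketch implements one layer of the generalized convolution and its message-passing backward rule. This is our own minimal illustration, not the authors' implementation: the function names are ours, $\sigma$ is fixed to ReLU, and the GCN-style normalized adjacency is just one example of a fixed convolution matrix.

```python
import numpy as np

def gnn_layer(Cs, X, Ws):
    """One generalized graph-convolution layer (Eq. (1)):
    X_next = sigma(sum_s C[s] @ X @ W[s]), with sigma = ReLU."""
    Z = sum(C @ X @ W for C, W in zip(Cs, Ws))
    return np.maximum(Z, 0.0)

def gnn_layer_grad(Cs, Ws, X_next, grad_X_next):
    """Backward rule (Eq. (3)) for ReLU:
    grad_X = sum_s C[s]^T (grad_X_next * 1[X_next > 0]) W[s]^T,
    which is itself a message passing over the same graph."""
    G = grad_X_next * (X_next > 0)          # gradient through ReLU
    return sum(C.T @ G @ W.T for C, W in zip(Cs, Ws))

# Example with a single fixed GCN-style filter C = D^{-1/2}(A+I)D^{-1/2}.
rng = np.random.default_rng(0)
n, f0, f1 = 6, 4, 8
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                      # undirected graph
A_hat = A + np.eye(n)                       # add self-loops
deg = A_hat.sum(axis=1)
C = A_hat / np.sqrt(np.outer(deg, deg))     # normalized adjacency
X, W = rng.standard_normal((n, f0)), rng.standard_normal((f0, f1))
X_next = gnn_layer([C], X, [W])
grad_X = gnn_layer_grad([C], [W], X_next, np.ones_like(X_next))
```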
3 Scalability Problem and Theoretical Framework

When a graph is large, we are forced to mini-batch the graph by sampling a subset of $b \ll n$ nodes in each iteration. Say the node indices are $i_1, \ldots, i_b$, and a mini-batch of node features is denoted by $X_B = X_{\langle i_b \rangle,:}$. To mini-batch efficiently for any model, we hope to fetch $\Theta(b)$ information to the training device and spend $\Theta(Lb)$ training time per iteration, while taking $\Theta(n/b)$ iterations to traverse the entire dataset. However, it is intrinsically difficult for most GNNs to meet these three scalability requirements at the same time. The receptive field of $L$ layers of graph convolution (Eq. (1)) is recursively given by $\mathcal{R}^L_i = \bigcup_{j \in \mathcal{R}^1_i} \mathcal{R}^{L-1}_j$ (starting with $\mathcal{R}^1_i \supseteq \{i\} \cup \mathcal{N}_i$), and its size grows exponentially with $L$. Thus, to optimize on a mini-batch of $b$ nodes, we require $\Omega(bd^L)$ inputs and training time per iteration. Sampling a subset of neighbors [2, 6] for each node in each layer does not change the exponential dependence on $L$. Although layer- [7, 25] and subgraph-sampling [9, 10] may require only $\Omega(b)$ inputs and $\Omega(Lb)$ training time per iteration, they are only able to consider an exponentially small proportion of messages compared with full-graph training. Most importantly, none of the existing sampling methods support dense convolution matrices with $O(n^2)$ non-zero entries. Please see Section 5 for a detailed comparison with sampling-based scalable methods after we introduce our framework.

Idea of dimensionality reduction. We aim to develop a scalable algorithm for any GNN model that can be re-formulated as Eq. (1), where the convolution matrix can be either fixed or learnable, and either sparse or dense. The major obstacle to scalability is that, for each layer of graph convolution, computing a mini-batch of forward-passed features $X^{(l+1)}_B = X^{(l+1)}_{\langle i_b \rangle,:}$ requires $O(n)$ entries of $C^{(l,s)}_B = C^{(l,s)}_{\langle i_b \rangle,:}$ and of $X^{(l)}$, which will not fit in device memory. Our goal is to apply a dimensionality reduction to both the convolution and node feature matrices, and then apply the convolution using compressed "sketches" of $C^{(l,s)}_B$ and $X^{(l)}$. More specifically, we look for a projection matrix $R \in \mathbb{R}^{n \times k}$ with $k \ll n$, such that the product of the low-dimensional sketches $\tilde{C}^{(l,s)}_B = C^{(l,s)}_B R \in \mathbb{R}^{b \times k}$ and $\tilde{X}^{(l)} = R^{\mathsf T} X^{(l)} \in \mathbb{R}^{k \times f_l}$ is approximately the same as $C^{(l,s)}_B X^{(l)}$. The approximated product (over all nodes) $\tilde{C}^{(l,s)} \tilde{X}^{(l)} = C^{(l,s)} R R^{\mathsf T} X^{(l)}$ can also be regarded as the result of using a low-rank approximation $C^{(l,s)} R R^{\mathsf T} \in \mathbb{R}^{n \times n}$ of the convolution matrix, such that $\mathrm{rank}(C^{(l,s)} R R^{\mathsf T}) \le k$. The distributional Johnson-Lindenstrauss lemma [26] (JL for short) shows the existence of such a projection $R$ with $k = \Theta(\log(n))$, and the following result by Kane and Nelson [27] shows that $R$ can be chosen to be quite sparse:

Theorem 1. For any convolution matrix $C \in \mathbb{R}^{n \times n}$, any column vector $X_{:,a} \in \mathbb{R}^n$ of the node feature matrix $X \in \mathbb{R}^{n \times f}$ (where $a = 1, \ldots, f$), and any $\varepsilon > 0$, there exists a projection matrix $R \in \mathbb{R}^{n \times k}$ (drawn from a distribution) with only an $O(\varepsilon)$-fraction of entries non-zero, such that

$$\Pr\big(\|C R R^{\mathsf T} X_{:,a} - C X_{:,a}\|_2 < \varepsilon \|C X_{:,a}\|_2\big) > 1 - \delta, \tag{4}$$

with $k = \Theta(\log(n)/\varepsilon^2)$ and $\delta = O(1/n)$.

Now the sketches $\tilde{C}^{(l,s)}_B$ and $\tilde{X}^{(l)}$ take up $O(b\log(n))$ and $\Theta(f_l \log(n))$ memory respectively and can fit into the training and inference device. The sparsity of the projection matrix $R$ is favorable because: (1) if the convolution matrix $C^{(l,s)}$ is sparse (e.g., direct-neighbor message passing, where only an $O(d/n)$-fraction of entries are non-zero), only an $O(\varepsilon d)$-fraction of entries are non-zero in the sketch $\tilde{C}^{(l,s)}$; (2) during training, $\tilde{X}^{(l)}$ is updated in a "streaming" fashion using each mini-batch's inputs $X^{(l)}_B$, and a sparse $R$ reduces the computation time by a factor of $O(\varepsilon)$.
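The sketching idea is easy to verify empirically. Below is a small self-contained experiment of our own, in which a dense Gaussian JL projection stands in for the sparse construction of Kane and Nelson; the sizes are arbitrary choices, and the point is only that $(C_B R)(R^{\mathsf T} X)$ approximates $C_B X$ with $k \ll n$.

```python
import numpy as np

def jl_sketch_matmul(C_B, X, k, rng):
    """Approximate C_B @ X via low-dimensional sketches (Section 3):
    C_tilde = C_B @ R (b x k) and X_tilde = R^T @ X (k x f), so that
    C_tilde @ X_tilde ~ C_B @ X with k = Theta(log(n) / eps^2).
    A dense Gaussian R is used here purely for simplicity."""
    n = X.shape[0]
    R = rng.standard_normal((n, k)) / np.sqrt(k)   # E[R @ R.T] = I_n
    return (C_B @ R) @ (R.T @ X)

rng = np.random.default_rng(0)
n, b, f, k = 2000, 32, 16, 256
C_B = rng.standard_normal((b, n)) / np.sqrt(n)
X = rng.standard_normal((n, f))
exact = C_B @ X
approx = jl_sketch_matmul(C_B, X, k, rng)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.3f}")   # shrinks as k grows
```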
However, the projection $R$ produced following the sparse JL lemma [27] is randomized and requires $O(\log^2(n))$ uniform random bits to sample. It is difficult to combine this with the deterministic feed-forward and back-propagation rules of neural networks, and there is no clue when and how we should update the projection matrix. Moreover, randomized projections destroy the "identity" of each node, and for learnable convolution matrices (Eq. (2)) it is impossible to compute the convolution matrix using only the sketch of features $\tilde{X}^{(l)}$. For this idea to be useful, we need a deterministic and identity-preserving construction of the projection matrix $R \in \mathbb{R}^{n \times k}$ that avoids these added complexities.

4 Proposed Method: Vector Quantized GNN

Dimensionality reduction using Vector Quantization (VQ). A natural and widely used method to reduce the dimensionality of data in a deterministic and identity-preserving manner is Vector Quantization [28] (VQ), a classical data compression algorithm that can be formulated as the following optimization problem:

$$\min_{R \in \{0,1\}^{n \times k},\ \tilde{X} \in \mathbb{R}^{k \times f}} \|X - R\tilde{X}\|_F \quad \text{s.t.} \quad R_{i,:} \in \{e^1_k, \ldots, e^k_k\}, \tag{5}$$

which is classically solved via k-means [28]. Here the sketch of features $\tilde{X}$ is called the feature "codewords." $R$ is called the codeword assignment matrix, whose rows are unit vectors in $\mathbb{R}^k$, i.e., $R_{i,v} = 1$ if and only if the $i$-th node is assigned to the $v$-th cluster in k-means. The objective in Eq. (5) is called the Within-Cluster Sum of Squares (WCSS), and we define the relative error of VQ as $\varepsilon = \|X - R\tilde{X}\|_F / \|X\|_F$. The rows of $\tilde{X}$ are the $k$ codewords (i.e., the centroids in k-means) and can be computed as $\tilde{X} = \mathrm{diag}^{-1}(R^{\mathsf T}\mathbf{1}_n)\, R^{\mathsf T} X$, which differs slightly from the definition in Section 3 in that a row-wise normalization of $R^{\mathsf T}$ is required. The sketch of the convolution matrix $\tilde{C}$ can still be computed as $\tilde{C} = CR$. In general, VQ provides a principled framework to learn the low-dimensional sketches $\tilde{X}$ and $\tilde{C}$ in a deterministic and node-identity-preserving manner. However, to enable mini-batch training and inference of GNNs using VQ, three more questions need to be answered:

• How to approximate the forward-passed mini-batch features of nodes using the learned codewords?
• How to back-propagate through VQ and estimate the mini-batch gradients of nodes?
• How to update the codewords and the assignment matrix along with the training of the GNN?

In the rest of this section, we introduce the VQ-GNN algorithm by answering all three questions and presenting a scalability analysis.

Approximated forward and backward message passing. To approximate the forward pass through a GNN layer (Eq. (1)) with a mini-batch of nodes $\langle i_b \rangle$, we can divide the messages into two categories: intra-mini-batch messages, and messages from out-of-mini-batch nodes; see the right panel of Fig. 1. Intra-mini-batch messages $C^{(l,s)}_{\mathrm{in}} X^{(l)}_B$ can always be computed exactly, where $C^{(l,s)}_{\mathrm{in}} = (C^{(l,s)}_B)_{:,\langle i_b \rangle} \in \mathbb{R}^{b \times b}$, because they rely only on the previous layer's node features of the current mini-batch. Equipped with the codewords $\tilde{X}^{(l)}$ and the codeword assignments of all nodes $R^{(l)}$, we can approximate the messages from out-of-mini-batch nodes as $\tilde{C}^{(l,s)}_{\mathrm{out}} \tilde{X}^{(l)}$, where $\tilde{X}^{(l)} = \mathrm{diag}^{-1}(R^{\mathsf T}\mathbf{1}_n)\, R^{\mathsf T} X^{(l)}$ as defined above and $\tilde{C}^{(l,s)}_{\mathrm{out}} = C^{(l,s)}_{\mathrm{out}} R$. Here, $C^{(l,s)}_{\mathrm{out}}$ is the remaining part of the convolution matrix after removing the intra-mini-batch messages, i.e., $(C^{(l,s)}_{\mathrm{out}})_{:,j} = (C^{(l,s)}_B)_{:,j}\, \mathbb{1}\{j \notin \langle i_b \rangle\}$ for any $j \in \{1, \ldots, n\}$, and $\tilde{C}^{(l,s)}_{\mathrm{out}}$ is its sketch. We can thus approximate the forward-passed mini-batch features $X^{(l+1)}_B$ by $\hat{X}^{(l+1)}_B = \sigma\big(\sum_s (C^{(l,s)}_{\mathrm{in}} X^{(l)}_B + \tilde{C}^{(l,s)}_{\mathrm{out}} \tilde{X}^{(l)}) W^{(l,s)}\big)$. However, this construction of $\hat{X}^{(l+1)}_B$ does not allow us to back-propagate through VQ straightforwardly using the chain rule. During back-propagation, we aim at approximating the previous layer's mini-batch gradients $\nabla_{X^{(l)}_B} \ell$ given the gradients of the (approximated) output $\nabla_{\hat{X}^{(l+1)}_B} \ell$ (Eq. (3)).
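The following NumPy sketch is our own minimal illustration of this forward approximation, with plain batch k-means standing in for the codebook learning (the paper's actual update rule appears later and in its Appendix E). It forms the codewords $\tilde{X} = \mathrm{diag}^{-1}(R^{\mathsf T}\mathbf{1}_n) R^{\mathsf T} X$ and splits a mini-batch forward pass into exact intra-batch messages plus codeword-approximated out-of-batch messages; all sizes and names are our choices.

```python
import numpy as np

def vq_codebook(X, k, iters=20, seed=0):
    """Plain k-means VQ for Eq. (5): returns the one-hot assignment
    matrix R (n x k) and codewords X_tilde = diag^{-1}(R^T 1) R^T X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(X.shape[0], k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for v in range(k):
            if (assign == v).any():
                centers[v] = X[assign == v].mean(axis=0)
    R = np.eye(k)[assign]                               # one-hot rows
    X_tilde = (R.T @ X) / np.maximum(R.sum(axis=0), 1.0)[:, None]
    return R, X_tilde

def approx_forward(C, X, batch, R, X_tilde, W):
    """Approximate mini-batch forward pass (one layer, one filter):
    X_hat_B = relu((C_in @ X_B + (C_out @ R) @ X_tilde) @ W).
    Intra-batch messages stay exact; the rest are quantized."""
    C_B = C[batch]                       # rows of C for the mini-batch
    C_in = C_B[:, batch]                 # exact intra-mini-batch part
    C_out = C_B.copy()
    C_out[:, batch] = 0.0                # keep out-of-batch columns only
    msgs = C_in @ X[batch] + (C_out @ R) @ X_tilde
    return np.maximum(msgs @ W, 0.0)

rng = np.random.default_rng(1)
n, f, k, b = 400, 16, 32, 50
C = rng.random((n, n)) * (rng.random((n, n)) < 0.05)  # sparse conv matrix
X, W = rng.standard_normal((n, f)), rng.standard_normal((f, f))
R, X_tilde = vq_codebook(X, k)
batch = rng.choice(n, b, replace=False)
X_hat_B = approx_forward(C, X, batch, R, X_tilde, W)
exact = np.maximum((C[batch] @ X) @ W, 0.0)
# Approximation error is driven entirely by the VQ quality.
print(np.linalg.norm(X_hat_B - exact) / np.linalg.norm(exact))
```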
Firstly, we do not know how to compute the partial derivatives of $\tilde{C}^{(l,s)}_{\mathrm{out}}$ and $\tilde{X}^{(l)}$ with respect to $X^{(l)}_B$, because the learning and updating of the VQ codewords and assignments are data dependent and usually realized by an iterative optimization algorithm. We would need to traverse an iterative computation graph to evaluate the partial derivative of $R^{(l)}$ with respect to $X^{(l)}_B$, which requires access to many historical features and gradients and thus violates the scalability constraints. Secondly, even if we apply a very rough approximation during back-propagation as in [29], namely assuming that the partial derivative of $R^{(l)}$ with respect to $X^{(l)}_B$ can be ignored (i.e., the codeword assignment matrix is detached from the computation graph, known as "straight-through" back-propagation), we are still unable to evaluate the derivatives of the codewords $\tilde{X}^{(l)}$, because they rely on node features outside the current mini-batch which are not on the training device. Generally speaking, designing a back-propagation rule for VQ under the mini-batch training setup is a challenging new problem.

It is helpful to re-examine what happens when we back-propagate on the full graph. In Section 2, we saw that back-propagation of a layer of convolution-based GNN can also be realized by message passing (Eq. (3)). In Fig. 2, we show that the messages related to a mini-batch of nodes can be classified into three types. The "green" and "red" messages are the intra-mini-batch messages and the messages from out-of-mini-batch nodes, respectively. Apart from these, although the "blue" messages to out-of-mini-batch nodes do not contribute to the forward-passed mini-batch features, they are used during back-propagation and are an important part of the back-propagated mini-batch gradients. Since both the forward pass and back-propagation can be realized by message passing, can we approximate the back-propagated mini-batch gradients $\nabla_{X^{(l)}_B} \ell$ in a symmetric manner? We can introduce a set of gradient codewords $\tilde{G}^{(l+1)} = \mathrm{diag}^{-1}(R^{\mathsf T}\mathbf{1}_n)\, R^{\mathsf T} G^{(l+1)}$ using the same assignment matrix, where $G^{(l+1)} = \nabla_{\hat{X}^{(l+1)}} \ell \odot \sigma'(\sigma^{-1}(X^{(l+1)}))$ is the gradient back-propagated through the non-linearity. Each gradient codeword corresponds one-to-one with a feature codeword, since we want to use only one assignment matrix $R$; each pair of codewords is concatenated together during VQ updates. Following this idea, we define the approximated forward and backward message passing as follows:

$$\begin{bmatrix} \hat{X}^{(l+1)}_B \\ \bullet \end{bmatrix} = \sigma\Bigg(\sum_s \underbrace{\begin{bmatrix} C^{(l,s)}_{\mathrm{in}} & \tilde{C}^{(l,s)}_{\mathrm{out}} \\ \big(\tilde{C}^{(l,s)}_{\mathrm{out}}\big)^{\mathsf T} & 0 \end{bmatrix}}_{\text{approx. message passing weight matrix } \overline{C}^{(l,s)}} \underbrace{\begin{bmatrix} X^{(l)}_B \\ \tilde{X}^{(l)} \end{bmatrix}}_{\text{mini-batch features and feat. codewords}} W^{(l,s)}\Bigg), \tag{6}$$

$$\begin{bmatrix} \hat{\nabla}_{X^{(l)}_B} \ell \\ \bullet \end{bmatrix} = \sum_s \big(\overline{C}^{(l,s)}\big)^{\mathsf T} \underbrace{\begin{bmatrix} G^{(l+1)}_B \\ \tilde{G}^{(l+1)} \end{bmatrix}}_{\text{mini-batch gradients and grad. codewords}} \big(W^{(l,s)}\big)^{\mathsf T}, \tag{7}$$

where $\overline{C}^{(l,s)} \in \mathbb{R}^{(b+k) \times (b+k)}$ is the approximated message passing weight matrix, shared between the forward-pass and back-propagation processes. The lower halves of the left-hand-side vectors of Eqs. (6) and (7), marked $\bullet$, are used in neither the forward nor the backward calculations and are never computed during training or inference. The approximated forward and backward message passing enables end-to-end mini-batch training and inference of GNNs and is the core of our VQ-GNN framework.
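A compact way to see Eqs. (6) and (7) in action is to build the shared block matrix explicitly. The sketch below is our own illustration under two stated assumptions: $\sigma$ is ReLU, and the inputs $G_B$, $\tilde{G}$ are taken to already include the factor $\sigma'(\sigma^{-1}(\cdot))$, as in the definition of $G^{(l+1)}$ above.

```python
import numpy as np

def approx_message_passing(C_in, C_out_tilde, X_B, X_tilde,
                           G_B, G_tilde, W):
    """Eqs. (6)-(7): the shared (b+k) x (b+k) block matrix
    C_bar = [[C_in, C_out_tilde], [C_out_tilde^T, 0]] drives both the
    approximate forward pass and the approximate mini-batch gradients.
    Only the top b rows of each product are ever used."""
    b, k = C_in.shape[0], C_out_tilde.shape[1]
    C_bar = np.block([[C_in, C_out_tilde],
                      [C_out_tilde.T, np.zeros((k, k))]])
    fwd = C_bar @ np.vstack([X_B, X_tilde]) @ W      # Eq. (6), pre-sigma
    bwd = C_bar.T @ np.vstack([G_B, G_tilde]) @ W.T  # Eq. (7)
    X_hat_B = np.maximum(fwd[:b], 0.0)               # sigma = ReLU
    grad_X_B = bwd[:b]
    return X_hat_B, grad_X_B

rng = np.random.default_rng(2)
b, k, f_in, f_out = 8, 4, 6, 5
X_hat_B, grad_X_B = approx_message_passing(
    rng.standard_normal((b, b)), rng.standard_normal((b, k)),
    rng.standard_normal((b, f_in)), rng.standard_normal((k, f_in)),
    rng.standard_normal((b, f_out)), rng.standard_normal((k, f_out)),
    rng.standard_normal((f_in, f_out)))
```

Sharing one matrix $\overline{C}^{(l,s)}$ between the two directions is what restores the "blue" gradient messages that a naive straight-through estimator would drop.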
Error bounds on estimated features and gradients. We can effectively upper-bound the estimation errors of mini-batch features and gradients using the relative error of VQ under some mild conditions. For ease of presentation, we assume the GNN has only one convolution matrix in the following theorems.

Theorem 2. If the VQ relative error of the $l$-th layer is $\varepsilon^{(l)}$, the convolution matrix $C^{(l)}$ is either fixed or learnable with the Lipschitz constant of $h_{\theta^{(l)}}(\cdot): \mathbb{R}^{2f_l} \to \mathbb{R}$ upper-bounded by $\mathrm{Lip}(h_{\theta^{(l)}})$, and the Lipschitz constant of the non-linearity is $\mathrm{Lip}(\sigma)$, then the estimation error of the forward-passed mini-batch features satisfies

$$\|\hat{X}^{(l+1)}_B - X^{(l+1)}_B\|_F \le \varepsilon^{(l)} \cdot \big(1 + O(\mathrm{Lip}(h_{\theta^{(l)}}))\big)\, \mathrm{Lip}(\sigma)\, \|C^{(l)}\|_F\, \|X^{(l)}\|_F\, \|W^{(l)}\|_F. \tag{8}$$

Corollary 3. If the conditions of Theorem 2 hold and the non-linearity satisfies $|\sigma'(z)| \le \sigma'_{\max}$ for any $z \in \mathbb{R}$, then the estimation error of the back-propagated mini-batch gradients satisfies

$$\|\hat{\nabla}_{X^{(l)}_B} \ell - \nabla_{X^{(l)}_B} \ell\|_F \le \varepsilon^{(l)} \cdot \big(1 + O(\mathrm{Lip}(h_{\theta^{(l)}}))\big)\, \sigma'_{\max}\, \|C^{(l)}\|_F\, \|\nabla_{X^{(l+1)}} \ell\|_F\, \|W^{(l)}\|_F. \tag{9}$$

Note that the error bounds rely on the Lipschitz constant of $h(\cdot)$ when the convolution matrix is learnable. In practice, we can Lipschitz-regularize GNNs like GAT [3] without affecting their performance; see Appendix E.

VQ-GNN: the complete algorithm and analysis of scalability. The only remaining question is how to update the learned codewords and assignments during training. In this paper, we use the VQ update rule proposed in [29], which updates the codewords as exponential moving averages of the mini-batch inputs; see Appendix E for the detailed algorithm. We find that this exponential moving average technique suits the mini-batch training of GNNs well and resembles the online k-means algorithm. See Fig. 3 for the schematic diagram of VQ-GNN; the complete pseudo-code is in Appendix E. With VQ-GNN, we can mini-batch train and perform inference on large graphs using GNNs, just like a regular neural network (e.g., an MLP). We have to maintain a small codebook of $k$ codewords and update it in each iteration, which takes an extra $O(Lkf)$ memory and $O(Lnkf)$ training time per epoch, where $L$ and $f$ are the numbers of layers and (hidden) features of the GNN, respectively. We can effectively preserve all messages related to a mini-batch while sampling nodes from the graph at random. The number of intra-mini-batch messages is $O(b^2 d/n)$ when the nodes are sampled randomly. Thus we only need to pass $O(b^2 d/n + bk)$ messages per iteration and $O(bd + nk)$ per epoch. In practice, when combined with techniques including product VQ and implicit whitening (see Appendix E), we can further improve the stability and performance of VQ-GNN. These theoretical and experimental analyses justify the efficiency of the proposed VQ-GNN framework.
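For intuition, the codebook update can be sketched as follows. This is our own minimal rendering of a VQ-VAE-style exponential-moving-average update in the spirit of [29] (the paper's precise rule is in its Appendix E): EMA counts and sums are maintained per codeword, so the centroids track an online k-means solution over the stream of mini-batches. The decay value is an illustrative choice.

```python
import numpy as np

def ema_codebook_update(sums, counts, X_B, decay=0.99):
    """One EMA codebook step on a mini-batch X_B (b x f).
    sums:   (k, f) EMA of per-codeword feature sums.
    counts: (k,)   EMA of per-codeword assignment counts.
    Returns updated (sums, counts, centers)."""
    centers = sums / np.maximum(counts, 1e-6)[:, None]
    d2 = ((X_B[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    R = np.eye(sums.shape[0])[d2.argmin(axis=1)]     # (b, k) one-hot
    counts = decay * counts + (1 - decay) * R.sum(axis=0)
    sums = decay * sums + (1 - decay) * (R.T @ X_B)
    return sums, counts, sums / np.maximum(counts, 1e-6)[:, None]

# Streaming over mini-batches resembles online k-means.
rng = np.random.default_rng(3)
k, f, b = 16, 8, 64
sums, counts = rng.standard_normal((k, f)), np.ones(k)
for _ in range(100):
    X_B = rng.standard_normal((b, f))
    sums, counts, centers = ema_codebook_update(sums, counts, X_B)
```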
5 Related Work

In this section, we review some recent scalable GNN methods and analyze their theoretical memory and time complexities, focusing on scalable algorithms that, like our VQ-GNN framework, can be universally applied to a variety of GNN models, including NS-SAGE [2], Cluster-GCN [9], and GraphSAINT [10]. (We call the neighbor-sampling method in [2] NS-SAGE and the GNN model in the same paper SAGE-Mean to avoid ambiguity.) We consider GCN as the simplest benchmark here. For a GCN with $L$ layers and $f$-dimensional (hidden) features in each layer, applied to a sparse graph with $n$ nodes and $m$ edges (i.e., average degree $d = m/n$) for "full-graph" training and inference, the memory usage is $O(Lnf + Lf^2)$ and the training/inference time is $O(Lmf + Lnf^2)$. We further assume the graph is large and the training and inference device memory is $O(b)$, where $b$ is the mini-batch size (i.e., the memory bottleneck limits the mini-batch size); generally $d \ll b \ll n \ll m$ holds. We divide the sampling baselines into three categories; the complexities of selected methods are in Table 2. See Appendix D for further related work discussions.

Neighbor-sampling. The neighbor-sampling scheme chooses a subset of neighbors in each layer to reduce the amount of message passing required. NS-SAGE [2] samples $r$ neighbors for each node and aggregates messages only from the sampled nodes. For a GNN with $L$ layers, $O(br^L)$ nodes are sampled in a mini-batch, so the complexities grow exponentially with the number of layers $L$; see Table 2. Therefore, NS-SAGE is not scalable on large graphs for a model with an arbitrary number of layers. NS-SAGE requires all the neighbors to draw non-stochastic predictions in the inference phase, resulting in $O(d^L)$ inference time since we cannot fit $O(n)$ nodes on the device at once. VR-GCN [6] proposes a variance reduction technique to further reduce the number $r$ of sampled neighbors. However, VR-GCN requires an $O(Lnf)$ side memory of all the nodes' hidden features and suffers from this added memory complexity.

Layer-sampling. These methods perform node sampling independently in each layer, which results in a constant sample size across all layers and limits the exponential expansion of the neighborhood size. FastGCN [7] applies importance sampling to reduce variance. Adapt [25] improves FastGCN with an additional sampling network but also incurs the significant overhead of the sampling algorithm.

Subgraph-sampling. These schemes sample a subgraph for each mini-batch and perform forward and backward passes on the same subgraph across all layers. Cluster-GCN [9] partitions a large graph into several densely connected subgraphs and samples a subset of subgraphs (with edges between clusters added back) to train on in each mini-batch. Cluster-GCN requires $O(m)$ pre-computation time and $O(bd)$ time to recover the intra-cluster edges when loading each mini-batch. GraphSAINT [10] samples a set of nodes and takes the induced subgraph for mini-batch training. We consider the best-performing variant, GraphSAINT-RW, which uses $L$ steps of random walk to induce a subgraph from $b$ randomly sampled nodes. $O(Lb)$ nodes and edges are covered in each of the $n/b$ mini-batches. Although $O(Ln)$ nodes are sampled with some repetition in an epoch, the number of edges covered (i.e., messages considered in each layer of a GNN) is also $O(Ln)$, which is usually much smaller than $m$. GraphSAINT-Node, which randomly samples nodes for each mini-batch, does not suffer from this factor of $L$ in the complexities; however, its performance is worse than GraphSAINT-RW's. Like NS-SAGE and some other sampling methods, Cluster-GCN and GraphSAINT-RW cannot draw predictions on a randomly sampled subgraph in the inference phase; thus they suffer from the same $O(d^L)$ inference time complexity as NS-SAGE; see Table 2.
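To put these asymptotics in perspective, here is a quick back-of-the-envelope comparison of our own. All numeric values below are illustrative assumptions (roughly ogbn-arxiv scale for $n$ and $d$; $b$, $L$, $r$, and $k$ are not the paper's reported hyperparameters), contrasting the per-iteration node count of neighbor sampling with VQ-GNN's message count.

```python
# Illustrative per-iteration counts; every value below is an assumed
# setting of ours, not a configuration reported in the paper.
n, d, b, L, r, k = 169_343, 13, 4096, 3, 10, 256
ns_sage_nodes = b * r ** L              # NS-SAGE: O(b r^L) sampled nodes
vq_gnn_msgs = b * b * d // n + b * k    # VQ-GNN: O(b^2 d / n + b k)
print(f"NS-SAGE nodes per iteration:   {ns_sage_nodes:,}")   # 4,096,000
print(f"VQ-GNN messages per iteration: {vq_gnn_msgs:,}")     # ~1.05M
```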
6 Experiments

In this section, we verify the efficiency, robustness, and universality of VQ-GNN through a series of experiments. See Appendix F for implementation details and Appendix G for ablation studies and additional experiments.

Scalability and efficiency: memory usage, convergence, training and inference time. We summarize the memory usage of the scalable methods and our VQ-GNN framework in Table 3. Based on the implementations in the PyG library [30], the memory consumption of GNN models usually grows linearly with respect to both the number of nodes and the number of edges in a mini-batch. On the ogbn-arxiv benchmark, we fix the number of gradient-descended nodes and the number of messages passed in a mini-batch to 85K and 1.5M, respectively, for a fair comparison between the sampling methods and our approach. VQ-GNN may require a small amount of extra memory when provided with the same number of nodes per batch, which is the cost of retaining all the edges from the original graph. However, our VQ-GNN framework can effectively preserve all the edges connected to a mini-batch of nodes (i.e., it never drops edges); see Fig. 1. Thus, when we fix the number of messages passed per batch, our method shows significant memory efficiency compared with the sampling baselines. Fig. 4 shows a convergence comparison of the various scalability methods, where we see that VQ-GNN is superior in terms of convergence speed with respect to training time. When training GCN and SAGE-Mean on the ogbn-arxiv benchmark for a specific amount of time (e.g., 100 s), the validation performance of VQ-GNN is always the highest. The training time in Fig. 4 excludes the time for data loading, pre-processing, and validation-set evaluation. Our VQ-GNN approach also leads to compelling inference speed-ups. Beyond the training-efficiency issues of GNNs, conducting inference on large-scale graphs poses some unique challenges. As discussed in Section 5, and following the standard implementations provided by the Open Graph Benchmark (OGB) [5], the three sampling-based baselines (which share the same inference procedure) require all of the L-hop neighbors of the mini-batch nodes to lie on the device at once during the inference phase. The inference time of SAGE-Mean trained with sampling methods on the ogbn-arxiv benchmark is 1.61 s, while our method accelerates inference by an order of magnitude and reduces the inference time to 0.40 s.

Performance comparison across various datasets, settings, and tasks. We validate the efficacy of our method on the benchmarks in Table 4. These four representative benchmarks are selected because they cover very different types of datasets, settings, and tasks. The ogbn-arxiv benchmark is a common citation network of arXiv papers, while Reddit is a very dense social network of Reddit posts with many more features per node and a larger average node degree; see Table 6 in Appendix F for detailed dataset statistics. PPI is a node classification benchmark under the inductive learning setting, i.e., neither the attributes nor the connections of test nodes are present during training, while the other benchmarks are all transductive. VQ-GNN can be applied in the inductive setting with only one extra step: during the inference stage, we need to find the codeword assignments (i.e., the nearest codewords) of the test nodes before making predictions, since we have no access to the test nodes during training. Neither the learned codewords nor the GNN parameters are updated during inference. ogbl-collab is a link prediction benchmark where the labels and loss are intrinsically different. It is very challenging for a scalable method to perform well on all of these benchmarks. In Table 4, we confirm that VQ-GNN is more robust than the three sampling-based methods. Across the four benchmarks, VQ-GNN always achieves performance similar to or better than the oracle "full-graph" training performance, while the other scalable algorithms may suffer a performance drop in some cases.
For example, NS-SAGE fails when training GAT on ogbl-collab, Cluster-GCN consistently falls behind on PPI, and GraphSAINT-RW's performance drops on ogbl-collab with the SAGE-Mean and GAT backbones. We believe this robust performance is VQ-GNN's unique value among the many other scalable solutions. The VQ-GNN framework is robust because it provides bounded approximations of "full-graph" training (Theorem 2 and Corollary 3), whereas most other scalable algorithms enjoy no such theoretical guarantee. VQ-GNN is also universal with respect to the backbone model, including but not limited to the GCN, SAGE-Mean, and GAT shown here; see Appendix G for more experiments on GNNs that utilize multi-hop neighborhoods and global context, e.g., graph transformers.

7 Conclusion

Summary of our framework: strengths, weaknesses, future directions, and broader impacts. This paper introduced the VQ-GNN framework, which can scale most state-of-the-art GNNs to large graphs through a principled and fundamentally different approach compared with sampling-based methods. We have shown both theoretically and experimentally that our approach is efficient in memory usage, training and inference time, and convergence speed. VQ-GNN can be universally applied to most GNN models and different graph learning tasks, and can equally scale up GNNs that utilize many-hops-away or global context in each layer. However, the performance of VQ-GNN relies on the quality of the approximation provided by VQ. In practice, for VQ to work adequately in GNNs, a set of techniques is necessary. Because of limited time, we did not explore all possible techniques or fully optimize the VQ design. Given that our preliminary VQ design already achieves competitive performance compared with the state-of-the-art sampling baselines, we hypothesize that further optimization of the VQ design could improve performance. We hope our work opens up promising new avenues of research for scaling up GNNs, an approach which also has the potential to be applied to other data domains wherever the size of a single sample is large, e.g., long time series or videos. Considering broader impacts, we view our work mainly as a methodological and theoretical contribution, which paves the way for more resource-efficient graph representation learning. We envision that our methodological innovations can enable more scalable large-network analysis for social good. However, progress in graph embedding learning might also enable hostile social network analyses, e.g., extracting fine-grained user interactions for social tracking.

Acknowledgments and Disclosure of Funding

Goldstein, Kong, and Chen were supported by the Office of Naval Research, the AFOSR MURI program, the DARPA Young Faculty Award, and the National Science Foundation Division of Mathematical Sciences. Additional support was provided by Capital One Bank and JP Morgan Chase. Huang and Ding were supported by a startup fund from the Department of Computer Science of the University of Maryland, National Science Foundation IIS-1850220 CRII Award 030742-00001, DOD-DARPA-Defense Advanced Research Projects Agency Guaranteeing AI Robustness against Deception (GARD), Air Force Material Command, and Adobe, Capital One, and JP Morgan faculty fellowships.
Li and Dickerson were supported in part by NSF CAREER Award IIS-1846237, NSF D-ISN Award #2039862, NSF Award CCF-1852352, NIH R01 Award NLM-013039-01, NIST MSE Award #20126334, DARPA GARD #HR00112020007, DoD WHS Award #HQ003420F0035, ARPA-E Award #4334192 and a Google Faculty Research Award.
1. How does the paper scale up GNN training, and what are the key techniques used?
2. What are the strengths and weaknesses of the proposed approach compared to other GNN methods?
3. How does the reviewer assess the experimental results and their significance regarding scalability?
4. What are some suggestions for future experiments or improvements to the proposed method?
Summary Of The Paper
This paper uses vector quantization to scale up GNNs. The main idea is to use quantized representations combined with a low-rank projection of the graph convolution matrix to avoid the "neighbor explosion" problem of GNNs. The proposed VQ-GNN is applied to node classification and link prediction, and shows comparable results.

Review
Although this paper is about scaling up GNN training, the experiments are only on small-scale datasets: only one dataset for node classification and one for link prediction. It would be interesting to see larger-scale experiments with node counts in the millions. Since this paper is about scaling up training, it would also be interesting to check real training time vs. accuracy. Why is the per-epoch time in Table 4 faster than Cluster-GCN and GraphSAINT? It seems to me that, per epoch, the proposed method should be slower, as it needs additional time to update the codebook. How does the performance vary with the number of layers? How about the inductive and transductive settings?
NIPS
Title VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization Abstract Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques are proposed to alleviate the “neighbor explosion” problem by considering only a small subset of messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the “neighbor explosion” problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In company with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks. 1 Introduction The rise of Graph Neural Networks (GNNs) has brought the modeling of complex graph data into a new era. Using message-passing, GNNs iteratively share information between neighbors in a graph to make predictions of node labels, edge labels, or graph-level properties. A number of powerful GNN architectures [1–4] have been widely applied to solve down-stream tasks such as recommendation, social analysis, visual recognition, etc. With the soaring size of realistic graph datasets and the industrial need to model them efficiently, GNNs are hindered by a scalability problem. An L-layer GNN aggregates information from all L-hop neighbors, and standard training routines require these neighbors to all lie on the GPU at once. This prohibits full-batch training when facing a graph with millions of nodes [5]. ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A number of sampling-based methods have been proposed to accommodate large graphs with limited GPU resources. These techniques can be broadly classified into three categories: (1) Neighborsampling methods [2, 6] sample a fixed-number of neighbors for each node; (2) Layer-sampling methods [7, 8] sample nodes in each layer independently with a constant sample size; (3) Subgraphsampling methods [9, 10] sample a subgraph for each mini-batch and perform forward and backpropagation on the same subgraph across all layers. 
Although these sampling-based methods may significantly speed up the training time of GNNs, they suffer from the following three major drawbacks: (1) At inference phase, sampling methods require all the neighbors to draw non-stochastic predictions, resulting in expensive predictions if the full graph cannot be fit on the inference device; (2) As reported in [5] and in Section 6, state-of-the-art sampling-baselines fail to achieve satisfactory results consistently across various tasks and datasets; (3) Sampling-based methods cannot be universally applied to GNNs that utilize many-hop or global context in each layer, which hinders the application of more powerful GNNs to large graphs. This paper presents VQ-GNN, a GNN framework using vector quantization to scale most state-ofthe-art GNNs to large graphs through a principled and fundamentally different approach compared with the sampling-based methods. We explore the idea of using vector quantization (VQ) as a means of dimensionality reduction to learn and update a small number of quantized reference vectors (codewords) of global node representations. In VQ-GNN, mini-batch message passing in each GNN layer is approximated by a VQ codebook update and an approximated form of message passing between the mini-batch of nodes and codewords; see Fig. 1. Our approach avoids the “neighbor explosion” problem and enables mini-batch training and inference of GNNs. In contrast to samplingbased techniques, VQ-GNN can effectively preserve all the messages passed to a mini-batch of nodes. We theoretically and experimentally show that our approach is efficient in terms of memory usage, training/inference time, and convergence speed. Experiments on various GNN backbones demonstrate the competitive performance of our framework compared with the full-graph training baseline and sampling-based scalable algorithms. Paper organization. The remainder of this paper is organized as follows. Section 2 summarizes GNNs that can be re-formulated into a common framework of graph convolution. Section 3 defines the scalability challenge of GNNs and shows that dimensionality reduction is a potential solution. In Section 4, we describe our approach, VQ-GNN, from theoretical framework to algorithm design and explain why it solves the scalability issue of most GNNs. Section 5 compares our approach to the sampling-based methods. Section 6 presents a series of experiments that validate the efficiency, robustness, and universality of VQ-GNN. Finally, Section 7 concludes this paper with a summary of limitations and broader impacts. 2 Preliminaries: GNNs defined as Graph Convolution Notations. Consider a graph with n nodes and m edges (average degree d = m/n). Connectivity is given by the adjacency matrix A ∈ {0, 1}n×n and features are defined on nodes by X ∈ Rn×f0 with f0 the length of feature vectors. Given a matrix C, let Ci,j , Ci,:, and C:,j denote its (i, j)-th entry, i-th row, j-th column, respectively. For a finite sequence 〈ib〉 : i1, . . . , ib, we use C〈ib〉,: to denote the matrix whose rows are the ib-th rows of matrix C. We use to denote the element-wise (Hadamard) product. ‖ · ‖p denotes the entry-wise `p norm of a vector and ‖ · ‖F denotes the Frobenius norm. We use In ∈ Rn×n to denote the identity matrix, 1n ∈ Rn to denote the vector whose entries are all ones, and ein to denote the unit vector in Rn whose i-th entry is 1. The 0-1 indicator function is 1{·}. We use diag(c) to denote a diagonal matrix whose diagonal entries are from vector c. 
Andf represents concatenation along the last axis. We use superscripts to refer to different copies of same kind of variable. For example, X(l) ∈ Rn×fl denotes node representations on layer l. A Graph Neural Network (GNN) layer takes the node representation of a previous layer X(l) as input and produces a new representation X(l+1), where X = X(0) is the input features. A common framework for generalized graph convolution. Although many GNNs are designed following different guiding principles including neighborhood aggregation (GraphSAGE [2], PNA [11]), spatial convolution (GCN [1]), spectral filtering (ChebNet [12], CayleyNet [13], ARMA [14]), self-attention (GAT [3], Graph Transformers [15–17]), diffusion (GDC [18], DCNN [19]), WeisfeilerLehman (WL) alignment (GIN [4], 3WL-GNNs [20, 21]), or other graph algorithms ([22, 23]). Despite these differences, nearly all GNNs can be interpreted as performing message passing on node features, followed by feature transformation and an activation function. As pointed out by Balcilar et al. [24], GNNs can typically be written in the form X(l+1) = σ (∑ s C(s)X(l)W (l,s) ) , (1) where C(s) ∈ Rn×n denotes the s-th convolution matrix that defines the message passing operator, s ∈ Z+ denotes index of convolution, and σ(·) denotes the non-linearity. W (l,s) ∈ Rfl×fl+1 is the learnable linear weight matrix for the l-th layer and s-th filter. Within this common framework, GNNs differ from each other by choice of convolution matrices C(s), which can be either fixed or learnable. A learnable convolution matrix relies on the inputs and learnable parameters and can be different in each layer (thus denoted as C(l,s)): C (l,s) i,j = C (s) i,j︸︷︷︸ fixed ·h(s) θ(l,s) (X (l) i,: , X (l) j,: )︸ ︷︷ ︸ learnable (2) where C(s) denotes the fixed mask of the s-th learnable convolution, which may depend on the adjacency matrix A and input edge features Ei,j . While h(s)(·, ·) : Rfl × Rfl → R can be any learnable model parametrized by θ(l,s). Sometimes a learnable convolution matrix may be further row-wise normalized as C(l,s)i,j ← C (l,s) i,j / ∑ j C (l,s) i,j , for example in GAT [3]. We stick to Eq. (2) in the main paper and discuss row-wise normalization in Appendices A and E. The receptive field of a layer of graph convolution (Eq. (1)) is defined as a set of nodesR1i whose features {X (l) j,: | j ∈ Ri} determines X(l+1)i,: . We re-formulate some popular GNNs into this generalized graph convolution framework; see Table 1 and Appendix A for more. The back-propagation rule of GNNs defined by Eq. (1) is as follows, ∇X(l)` = ∑ s ( C(l,s) )T( ∇X(l+1)` σ′ ( σ−1 ( X(l+1) )))( W (l,s) )T , (3) which can also be understood as a form of message passing. σ′ and σ−1 are the derivative and inverse of σ respectively and ∇X(l+1)` σ′ ( σ−1(X(l+1)) ) is the gradients back-propagated through the non-linearity. 3 Scalability Problem and Theoretical Framework When a graph is large, we are forced to mini-batch the graph by sampling a subset of b n nodes in each iteration. Say the node indices are i1, . . . , ib and a mini-batch of node features is denoted by XB = X〈ib〉,:. To mini-batch efficiently for any model, we hope to fetch Θ(b) information to the training device, spend Θ(Lb) training time per iteration while taking (n/b) iterations to traverse through the entire dataset. However, it is intrinsically difficult for most of the GNNs to meet these three scalability requirements at the same time. The receptive field of L layers of graph convolution (Eq. 
(1)) is recursively given by RLi = ⋃ j∈R1i RL−1j (starting with R1i ⊇ {i} ∪ Ni), and its size grows exponentially with L. Thus, to optimize on a mini-batch of b nodes, we require Ω(bdL) inputs and training time per iteration. Sampling a subset of neighbors [2, 6] for each node in each layer does not change the exponential dependence on L. Although layer- [7, 25] and subgraph-sampling [9, 10] may require only Ω(b) inputs and Ω(Lb) training time per iteration, they are only able to consider an exponentially small proportion of messages compared with full-graph training. Most importantly, all existing sampling methods do not support dense convolution matrices with O(n2) non-zero terms. Please see Section 5 for a detailed comparison with sampling-based scalable methods after we introduce our framework. Idea of dimensionality reduction. We aim to develop a scalable algorithm for any GNN models that can be re-formulated as Eq. (1), where the convolution matrix can be either fixed or learnable, and either sparse or dense. The major obstacle to scalability is that, for each layer of graph convolution, to compute a mini-batch of forward-passed features X(l+1)B = X (l+1) 〈ib〉,: , we need O(n) entries of C (l,s) B = C (l,s) 〈ib〉,: and X (l), which will not fit in device memory. Our goal is to apply a dimensionality reduction to both convolution and node feature matrices, and then apply convolution using compressed “sketches” of C(l,s)B and X (l). More specifically, we look for a projection matrix R ∈ Rn×k with k n, such that the product of low-dimensional sketches C̃ (l,s) B = C (l,s) B R ∈ Rb×k and X̃(l) = RTX(l) ∈ Rk×fl is approximately the same as C (l,s) B X (l). The approximated product (of all nodes) C̃(l,s)X̃(l) = C(l,s)RRTX(l) can also be regarded as the result of using a low-rank approximation C(l,s)RRT ∈ Rn×n of the convolution matrix such that rank ( C(l,s)RRT ) ≤ k. The distributional Johnson–Lindenstrauss lemma [26] (JL for short) shows the existence of such projectionR withm = Θ(log(n)), and the following result by Kane and Nelson [27] shows that R can be chosen to quite sparse: Theorem 1. For any convolution matrix C ∈ Rn×n, any column vector X:,a ∈ Rn of the node feature matrix X ∈ Rn×f (where a = 1, . . . , f ) and ε > 0, there exists a projection matrix R ∈ Rn×k (drawn from a distribution) with only an O(ε)-fraction of entries non-zero, such that Pr ( ‖CRRTX:,a − CX:,a‖2 < ε‖CX:,a‖2 ) > 1− δ, (4) with k = Θ(log(n)/ε2) and δ = O(1/n). Now, the sketches C̃(l,s)B and X̃ (l) take up O(b log(n)) and Θ(fl log(n)) memory respectively and can fit into the training and inference device. The sparsity of projection matrix R is favorable because:(1) if the convolution matrix C(l,s) is sparse (e.g., direct-neighbor message passing where only O(d/n)-fraction of entries are non-zero), only an O(εd)-fraction of entries are non-zero in the sketch C̃(l,s); (2) During training, X̃(l) is updated in a “streaming” fashion using each minibatch’s inputs X(l)B , and a sparse R reduces the computation time by a factor of O(ε). However, the projection R produced following the sparse JL-lemma [27] is randomized and requires O(log2(n)) uniform random bits to sample. It is difficult to combine this with the deterministic feed-forward and back-propagation rules of neural networks, and there is no clue when and how we should update the projection matrix. Moreover, randomized projections destroy the “identity” of each node, and for learnable convolution matrices (Eq. 
(2)), it is impossible to compute the convolution matrix only using the sketch of features X̃(l). For this idea to be useful, we need a deterministic and identity-preserving construction of the projection matrix R ∈ Rn×k to avoid these added complexities. 4 Proposed Method: Vector Quantized GNN Dimensionality reduction using Vector Quantization (VQ). A natural and widely-used method to reduce the dimensionality of data in a deterministic and identity-preserving manner is Vector Quantization [28] (VQ), a classical data compression algorithm that can be formulated as the following optimization problem: min R∈{0,1}n×k , X̃∈Rk×f ‖X −RX̃‖F s.t. Ri,: ∈ {e1k, . . . , ekk}, (5) which is classically solved via k-means [28]. Here the sketch of features X̃ is called the feature “codewords.” R is called the codeword assignment matrix, whose rows are unit vectors in Rk, i.e., Ri,v = 1 if and only if the i-th node is assigned to the v-th cluster in k-means. The objective in Eq. (5) is called Within-Cluster Sum of Squares (WCSS), and we can define the relative error of VQ as = ‖X − RX̃‖F /‖X‖F . The rows of X̃ are the k codewords (i.e., centroids in k-means), and can be computed as X̃ = diag−1(RT1n)RTX , which is slightly different from the definition in Section 3 as a row-wise normalization of RT is required. The sketch of the convolution matrix C̃ can still be computed as C̃ = CR. In general, VQ provides us a principled framework to learn the low-dimensional sketches X̃ and C̃, in a deterministic and node-identity-preserving manner. However, to enable mini-batch training and inference of GNNs using VQ, three more questions need to be answered: • How to approximate the forward-passed mini-batch features of nodes using the learned codewords? • How to back-propagate through VQ and estimate the mini-batch gradients of nodes? • How to update the codewords and assignment matrix along with the training of GNN? In the following part of this section, we introduce the VQ-GNN algorithm by answering all the three questions and presenting a scalability analysis. Approximated forward and backward message passing. To approximate the forward pass through a GNN layer (Eq. (1)) with a mini-batch of nodes 〈ib〉, we can divide the messages into two categories: intra-mini-batch messages, and messages from out-of-mini-batch nodes; see the right figure of Fig. 1. Intra-mini-batch messages C(l,s)in X (l) B can always be computed exactly, where C(l,s)in = (C (l,s) B ):,〈ib〉 ∈ Rb×b, because they only rely on the previous layer’s node features of the current mini-batch. Equipped with the codewords X̃(l) and the codeword assignment of all nodes R(l), we can approximate the messages from out-of-mini-batch nodes as C̃ (l,s) out X̃ (l), where X̃(l) = diag−1(RT1n)RTX(l) as defined above and C̃ (l,s) out = C (l,s) out R. Here, C (l,s) out is the remaining part of the convolution matrix after removing the intra-mini-batch messages, thus (C(l,s)out ):,j = (C (l,s) B ):,j1{j ∈ 〈ib〉} for any j ∈ {1, . . . , n}, and C̃(l,s) is the sketch of C (l,s) out . In general, we can easily approximate the forward-passed mini-batch features X (l+1) B by X̂ (l+1) B = σ (∑ s(C (l,s) in X (l) B + C̃ (l,s) out X̃ (l))W (l,s) ) . However, the above construction of X̂(l+1)B does not allow us to back-propagate through VQ straightforwardly using chain rules. During back-propagation, we aim at approximating the previous layer’s mini-batch gradients ∇ X (l) B ` given the gradients of the (approximated) output ∇ X̂ (l+1) B ` (Eq. (3)). 
Firstly, we do not know how to compute the partial derivative of C̃(l,s)out and X̃ (l) with respect to X(l)B , because the learning and updating of VQ codewords and assignment are data dependent and are usually realized by an iterative optimization algorithm. Thus, we need to go through an iterative computation graph to evaluate the partial derivative of R(l) with respect to X(l)B , which requires access to many historical features and gradients, thus violating the scalability constraints. Secondly, even if we apply some very rough approximation during back-propagation as in [29], that is, assuming that the partial derivative of R(l) with respect to X(l)B can be ignored (i.e., the codeword assignment matrix is detached from the computation graph, known as “straight through” back-propagation), we are not able to evaluate the derivatives of codewords X̃(l) because they rely on some node features out of the current mini-batch and are not in the training device. Generally speaking, designing a back-propagation rule for VQ under the mini-batch training setup is a challenging new problem. It is helpful to re-examine what is happening when we back-propagate on the full graph. In Section 2, we see that back-propagation of a layer of convolution-based GNN can also be realized by message passing (Eq. (3)). In Fig. 2, we show the messages related to a mini-batch of nodes can be classified into three types. The “green” and “red” messages are the intra-mini-batch messages and the messages from out-of-mini-batch nodes, respectively. Apart from them, although the “blue” messages to out-of-mini-batch nodes do not contribute to the forward-passed mini-batch features, they are used during back-propagation and are an important part of the back-propagated mini-batch gradients. Since both forward-pass and back-propagation can be realized by message passing, can we approximate the back-propagated mini-batch gradients ∇ X (l) B ` in a symmetric manner? We can introduce a set of gradient codewords G̃(l+1) = diag−1(RT1n)RTG(l+1) using the same assignment matrix, where G(l+1) = ∇X̂(l+1)` σ ′(σ−1(X(l+1))) is the gradients back-propagated through non-linearity. Each gradient codeword corresponds one-to-one with a feature codeword since we want to use only one assignment matrix R. Each pair of codewords are concatenated together during VQ updates. Following this idea, we define the approximated forward and backward message passing as follows:[ X̂ (l+1) B • ] = σ (∑ s [ C (l,s) in C̃ (l,s) out (C̃(l,s)T)out 0 ] ︸ ︷︷ ︸ approx. message passing weight matrix C(l,s) [ X (l) B X̃(l) ] ︸ ︷︷ ︸ mini-batch features and feat. codewords W (l,s) ) , (6) ∇̂X(l)B ` • =∑ s ( C (l,s) )T [G(l+1)B G̃(l+1) ] ︸ ︷︷ ︸ mini-batch gradients and grad. codewords ( W (l,s) )T , (7) where C (l,s) ∈ R(b+m)×(b+m) is the approximated message passing weight matrix and is shared during the forward-pass and back-propagation process. The lower halves of the left-hand side vectors of Eqs. (6) and (7) are used in neither the forward nor the backward calculations and are never calculated during training or inference. The approximated forward and backward message passing enables the end-to-end mini-batch training and inference of GNNs and is the core of our VQ-GNN framework. Error-bounds on estimated features and gradients. We can effectively upper bound the estimation errors of mini-batch features and gradients using the relative error of VQ under some mild conditions. 
For ease of presentation, we assume the GNN has only one convolution matrix in the following theorems. Theorem 2. If the VQ relative error of l-th layer is (l), the convolution matrix C(l) is either fixed or learnable with the Lipschitz constant of hθ(l)(·) : R2fl → R upper-bounded by Lip(hθ(l)), and the Lipschitz constant of the non-linearity is Lip(σ), then the estimation error of forward-passed mini-batch features satisfies, ‖X̂(l+1)B −X (l+1) B ‖F ≤ (l) · (1 +O(Lip(hθ(l))))Lip(σ)‖C(l)‖F ‖X(l)‖F ‖W (l)‖F . (8) Corollary 3. If the conditions in Theorem 2 hold and the non-linearity satisfies |σ′(z)| ≤ σ′max for any z ∈ R, then the estimation error of back-propagated mini-batch gradients satisfies, ‖∇̂ X (l) B `−∇ X (l) B `‖F ≤ (l) · (1 +O(Lip(hθ(l)))σ′max‖C(l)‖F ‖∇X(l+1)`‖F ‖W (l)‖F . (9) Note that the error bounds rely on the Lipschitz constant of h(·) when the convolution matrix is learnable. In practice, we can Lipshitz regularize GNNs like GAT [3] without affecting their performance; see Appendix E. VQ-GNN: the complete algorithm and analysis of scalability. The only remaining question is how to update the learned codewords and assignments during training? In this paper, we use the VQ update rule proposed in [29], which updates the codewords as exponential moving averages of the mSeini-batch inputs; see Appendix E for the detailed algorithm. We find such an exponential moving average technique suits us well for the mini-batch training of GNNs and resembles the online k-means algorithm. See Fig. 3 for the schematic diagram of VQ-GNN, and the complete pseudo-code is in Appendix E. With VQ-GNN, we can mini-batch train and perform inference on large graphs using GNNs, just like a regular neural network (e.g., MLP). We have to maintain a small codebook of k codewords and update it for each iteration, which takes an extra O(Lkf) memory and O(Lnkf) training time per epoch, where L and f are the numbers of layers and (hidden) features of the GNN respectively. We can effectively preserve all messages related to a mini-batch while randomly sampling nodes from the graph. The number of intra-mini-batch messages is O(b2d/n) when the nodes are sampled randomly. Thus we only need to pass O(b2d/n + bk) messages per iteration and O(bd + nk) per epoch. In practice, when combined with techniques including product VQ and implicit whitening (see Appendix E), we can further improve the stability and performance of VQ-GNN. These theoretical and experimental analyses justify the efficiency of the proposed VQ-GNN framework. 5 Related Work In this section, we review some of the recent scalable GNN methods and analyze their theoretical memory and time complexities, with a focus on scalable algorithms that can be universally applied to a variety of GNN models (like our VQ-GNN framework), including NS-SAGE2 [2], Cluster-GCN [9], and GraphSAINT [10]. We consider GCN here as the simplest benchmark. For a GCN with L layers and f -dimensional (hidden) features in each layer, when applied to a sparse graph with n nodes and m edges (i.e., average degree d = m/n) for “full-graph” training and inference: the memory usage is O(Lnf + Lf2) and the training/inference time is O(Lmf + Lnf2). We further assume the graph is large and consider the training and inference device memory is O(b) where b is the mini-batch 2We call the neighbor sampling method in [2] NS-SAGE and the GNN model in the same paper SAGE-Mean to avoid ambiguity. size (i.e., the memory bottleneck limits the mini-batch size), and generally d b n m holds. 
With VQ-GNN, we can mini-batch train and perform inference on large graphs using GNNs, just like a regular neural network (e.g., an MLP). We have to maintain a small codebook of $k$ codewords and update it in each iteration, which takes an extra $O(Lkf)$ memory and $O(Lnkf)$ training time per epoch, where $L$ and $f$ are the numbers of layers and (hidden) features of the GNN, respectively. We can effectively preserve all messages related to a mini-batch while randomly sampling nodes from the graph. The number of intra-mini-batch messages is $O(b^2 d/n)$ when the nodes are sampled randomly. Thus we only need to pass $O(b^2 d/n + bk)$ messages per iteration and $O(bd + nk)$ per epoch. In practice, when combined with techniques including product VQ and implicit whitening (see Appendix E), we can further improve the stability and performance of VQ-GNN. These theoretical and experimental analyses justify the efficiency of the proposed VQ-GNN framework.

5 Related Work

In this section, we review some recent scalable GNN methods and analyze their theoretical memory and time complexities, focusing on scalable algorithms that, like our VQ-GNN framework, can be universally applied to a variety of GNN models, including NS-SAGE [2], Cluster-GCN [9], and GraphSAINT [10]. (We call the neighbor sampling method in [2] NS-SAGE and the GNN model in the same paper SAGE-Mean to avoid ambiguity.) We consider GCN here as the simplest benchmark. For a GCN with $L$ layers and $f$-dimensional (hidden) features in each layer, applied to a sparse graph with $n$ nodes and $m$ edges (i.e., average degree $d = m/n$) for "full-graph" training and inference, the memory usage is $O(Lnf + Lf^2)$ and the training/inference time is $O(Lmf + Lnf^2)$. We further assume the graph is large and the training and inference device memory is $O(b)$, where $b$ is the mini-batch size (i.e., the memory bottleneck limits the mini-batch size); generally $d \ll b \ll n \le m$ holds. We divide the sampling baselines into three categories; the complexities of selected methods are summarized in Table 2. See Appendix D for more discussion of related work.

Neighbor-sampling. The neighbor-sampling scheme chooses a subset of neighbors in each layer to reduce the amount of message passing required. NS-SAGE [2] samples $r$ neighbors for each node and only aggregates the messages from the sampled nodes. For a GNN with $L$ layers, $O(br^L)$ nodes are sampled in a mini-batch, so the complexities grow exponentially with the number of layers $L$; see Table 2. Therefore, NS-SAGE is not scalable on large graphs for a model with an arbitrary number of layers. NS-SAGE also requires all the neighbors to draw non-stochastic predictions in the inference phase, resulting in an $O(d^L)$ inference time, since we cannot fit $O(n)$ nodes onto the device at once. VR-GCN [6] proposes a variance-reduction technique to further reduce the number $r$ of sampled neighbors. However, VR-GCN requires an $O(Lnf)$ side memory of all the nodes' hidden features and suffers from this added memory complexity.

Layer-sampling. These methods perform node sampling independently in each layer, which results in a constant sample size across all layers and limits the exponential expansion of the neighbor size. FastGCN [7] applies importance sampling to reduce variance. Adapt [25] improves FastGCN with an additional sampling network, but also incurs the significant overhead of the sampling algorithm.

Subgraph-sampling. These schemes sample a subgraph for each mini-batch and perform the forward and backward passes on the same subgraph across all layers. Cluster-GCN [9] partitions a large graph into several densely connected subgraphs and samples a subset of subgraphs (with edges between clusters added back) to train on in each mini-batch. Cluster-GCN requires $O(m)$ pre-computation time and $O(bd)$ time to recover the intra-cluster edges when loading each mini-batch. GraphSAINT [10] samples a set of nodes and takes the induced subgraph for mini-batch training. We consider the best-performing variant, GraphSAINT-RW, which uses $L$ steps of random walk to induce a subgraph from $b$ randomly sampled nodes. $O(Lb)$ nodes and edges are covered in each of the $n/b$ mini-batches. Although $O(Ln)$ nodes are sampled (with some repetition) in an epoch, the number of edges covered (i.e., messages considered in each layer of the GNN) is also $O(Ln)$, which is usually much smaller than $m$. GraphSAINT-Node, which randomly samples nodes for each mini-batch, does not suffer from this factor of $L$ in the complexities; however, its performance is worse than GraphSAINT-RW's. Like NS-SAGE and some other sampling methods, Cluster-GCN and GraphSAINT-RW cannot draw predictions on a randomly sampled subgraph in the inference phase, so they suffer from the same $O(d^L)$ inference time complexity as NS-SAGE; see Table 2.
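To get a feel for these asymptotics, the short script below plugs in representative numbers (our own, purely illustrative choices) and contrasts the $O(br^L)$ nodes a mini-batch must touch under neighbor sampling with VQ-GNN's depth-independent $O(b + k)$ mini-batch plus codebook.

```python
# Illustrative back-of-the-envelope comparison of per-iteration receptive
# field sizes; the numbers below are arbitrary but realistic choices.
b, r, k = 4096, 10, 256          # mini-batch size, sampled neighbors, codewords

for L in (2, 3, 4, 5):           # number of GNN layers
    neighbor_sampling = b * r ** L   # O(b * r^L) nodes touched per iteration
    vq_gnn = b + k                   # O(b + k), independent of the depth L
    print(f"L={L}: NS-SAGE ~{neighbor_sampling:,} nodes, VQ-GNN ~{vq_gnn:,}")
```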
6 Experiments

In this section, we verify the efficiency, robustness, and universality of VQ-GNN through a series of experiments. See Appendix F for implementation details and Appendix G for ablation studies and further experiments.

Scalability and efficiency: memory usage, convergence, training and inference time. We summarize the memory usage of the scalable methods and our VQ-GNN framework in Table 3. With the implementations of the PyG library [30], the memory consumption of GNN models usually grows linearly with both the number of nodes and the number of edges in a mini-batch. On the ogbn-arxiv benchmark, we fix the number of gradient-updated nodes and the number of messages passed in a mini-batch to 85K and 1.5M, respectively, for a fair comparison between the sampling methods and our approach. VQ-GNN may require a small amount of extra memory when given the same number of nodes per batch, which is the cost of retaining all the edges of the original graph. However, our VQ-GNN framework effectively preserves all the edges connected to a mini-batch of nodes (i.e., it never drops edges); see Fig. 1. Thus, when we fix the number of messages passed per batch, our method shows significantly better memory efficiency than the sampling baselines.

Fig. 4 compares the convergence of the various scalability methods: VQ-GNN is superior in terms of convergence speed with respect to training time. When training GCN and SAGE-Mean on the ogbn-arxiv benchmark for a fixed amount of time (e.g., 100 s), the validation performance of VQ-GNN is always the highest. The training time in Fig. 4 excludes the time for data loading, pre-processing, and validation-set evaluation. Our VQ-GNN approach also leads to compelling inference speed-ups. Apart from the training-efficiency issues of GNNs, conducting inference on large-scale graphs poses its own challenges. Following our discussion in Section 5, and the standard implementations provided by the Open Graph Benchmark (OGB) [5], the three sampling-based baselines (which share the same inference procedure) require all of the $L$-hop neighbors of the mini-batch nodes to lie on the device at once during the inference phase. The inference time of SAGE-Mean trained with the sampling methods on the ogbn-arxiv benchmark is 1.61 s, while our method accelerates inference by an order of magnitude, reducing the inference time to 0.40 s.

Performance comparison across various datasets, settings, and tasks. We validate the efficacy of our method on various benchmarks in Table 4. The four representative benchmarks are selected because they cover very different types of datasets, settings, and tasks. The ogbn-arxiv benchmark is a common citation network of arXiv papers, while Reddit is a very dense social network of Reddit posts, with many more features per node and a larger average node degree; see Table 6 in Appendix F for detailed dataset statistics. PPI is a node-classification benchmark under the inductive learning setting, i.e., neither the attributes nor the connections of test nodes are available during training, while the other benchmarks are all transductive. VQ-GNN can be applied under the inductive setting with only one extra step: during the inference stage, we need to find the codeword assignments (i.e., the nearest codeword) of the test nodes before making predictions, since we had no access to the test nodes during training. Neither the learned codewords nor the GNN parameters are updated during inference.
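This extra inductive-inference step amounts to a nearest-neighbor search against the frozen codebook; a minimal sketch (naming is ours, not the released code):

```python
import numpy as np

def assign_test_nodes(X_test, codewords):
    """Assign unseen (inductive) test nodes to their nearest frozen codewords.

    X_test    (t, f): features of test nodes unseen during training
    codewords (k, f): learned codebook, frozen at inference time
    Returns the (t, k) one-hot rows of the assignment matrix R for the test set.
    """
    d2 = ((X_test[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)  # (t, k)
    return np.eye(codewords.shape[0])[d2.argmin(axis=1)]
```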
ogbl-collab is a link-prediction benchmark, where the labels and loss are intrinsically different. It is very challenging for a scalable method to perform well on all of these benchmarks. In Table 4, we confirm that VQ-GNN is more robust than the three sampling-based methods. Across the four benchmarks, VQ-GNN always achieves performance similar to or better than the oracle "full-graph" training performance, while the other scalable algorithms suffer performance drops in some cases. For example, NS-SAGE fails when training GAT on ogbl-collab, Cluster-GCN consistently falls behind on PPI, and GraphSAINT-RW's performance drops on ogbl-collab with the SAGE-Mean and GAT backbones. We believe this robust performance is the unique value of VQ-GNN among the many other scalable solutions. The VQ-GNN framework is robust because it provides bounded approximations of "full-graph" training (Theorem 2 and Corollary 3), whereas most other scalable algorithms do not enjoy such a theoretical guarantee. VQ-GNN is also universal across backbone models, including but not limited to the GCN, SAGE-Mean, and GAT models shown here; see Appendix G for further experiments on GNNs that utilize multi-hop neighborhoods and global context, e.g., graph transformers.

7 Conclusion

Summary of our framework: strengths, weaknesses, future directions, and broader impacts. This paper introduced VQ-GNN, a framework that can scale most state-of-the-art GNNs to large graphs through a principled and fundamentally different approach compared with sampling-based methods. We have shown both theoretically and experimentally that our approach is efficient in memory usage, training and inference time, and convergence speed. VQ-GNN can be universally applied to most GNN models and different graph learning tasks, and can equivalently scale up GNNs that utilize many-hops-away or global context in each layer. However, the performance of VQ-GNN relies on the quality of the approximation provided by VQ. In practice, a set of techniques is necessary for VQ to work adequately within a GNN. Because of limited time, we did not exhaustively explore all possible techniques or optimize the VQ design. Given that our preliminary design of VQ in GNNs already achieves competitive performance compared with the state-of-the-art sampling baselines, we hypothesize that further optimization of the VQ design could improve performance. We hope our work opens up promising new avenues of research for scaling up GNNs, and that it can also be applied to other data domains where the size of a single sample is large, e.g., long time series or videos.

Considering broader impacts, we view our work mainly as a methodological and theoretical contribution, which paves the way for more resource-efficient graph representation learning. We envision that our methodological innovations can enable more scalable large-network analysis for social good. However, progress in graph embedding learning might also enable hostile social-network analyses, e.g., extracting fine-grained user interactions for social tracking.

Acknowledgments and Disclosure of Funding

Goldstein, Kong, and Chen were supported by the Office of Naval Research, the AFOSR MURI program, the DARPA Young Faculty Award, and the National Science Foundation Division of Mathematical Sciences. Additional support was provided by Capital One Bank and JP Morgan Chase. Huang and Ding were supported by a startup fund from the Department of Computer Science of the University of Maryland, National Science Foundation IIS-1850220 CRII Award 030742-00001, DOD-DARPA Guaranteeing AI Robustness against Deception (GARD), Air Force Material Command, and Adobe, Capital One, and JP Morgan faculty fellowships.
Li and Dickerson were supported in part by NSF CAREER Award IIS-1846237, NSF D-ISN Award #2039862, NSF Award CCF-1852352, NIH R01 Award NLM-013039-01, NIST MSE Award #20126334, DARPA GARD #HR00112020007, DoD WHS Award #HQ003420F0035, ARPA-E Award #4334192 and a Google Faculty Research Award.
1. What is the main idea behind the proposed VQ-GNN framework?
2. How does the method reduce computational and memory costs while preserving full-batch results?
3. Can you explain the connection between VQ-GNN and the Reformer architecture or locality-sensitive-hashing approximations in Transformers?
4. How does the performance of VQ-GNN compare to alternative scalable GNN training methods?
5. Can you provide experimental analysis on codebook/minibatch size and how it affects the framework's performance and computational costs?
6. Can you elaborate on the splitting of feature vectors into small pieces in the VQ-GNN algorithm and its importance?
7. How do whitening and Lipschitz regularization contribute to the success of VQ-GNN, and what limitations might there be regarding their use?
8. Could you restructure the paper to make practical details like product VQ more apparent in the main body?
Summary Of The Paper

This paper presents VQ-GNN: a scalable framework for training graph neural networks that approximates full-graph message passing with a memory of global nodes that is updated during training. The authors derive approximate forward and backward passes and show that the network can preserve full-batch results while reducing computational and memory costs.

Review

I think the general idea of approximating the general form of convolutional matrices with a low-rank decomposition is good. Is there a connection to the Reformer architecture (Kitaev, Kaiser et al., 2020) and locality-sensitive-hashing approximations to full attention matrices in Transformers?

The proposed method demonstrates substantial gains on link-prediction tasks relative to alternative scalable GNN training methods. The performance (accuracy/hits) is similar to, and in some cases better than, the full-graph algorithm. Combined with the memory and training/inference speed gains, it makes for a compelling set of results.

I like the analysis of the fundamental scalability limitations of other 'scalable' GNN training algorithms. The highlight is the futility of managing the exponential dependence of the receptive field on the number of layers by dropping nodes or layers. In general, the paper contains extensive theoretical results and analyses of the method's complexity relative to alternatives.

It would be useful to see some experimental analysis of codebook/minibatch size, for instance a plot of model performance/computational costs as a function of the minibatch or codebook size. Otherwise it's not clear how sensitive the framework is to these factors.

It seems like a potentially important bit of experimental detail is added in the appendix [L641-646]: "To mitigate the error induced by VQ in the high-dimensional space, we split feature vectors into small pieces. In practice, we find that when the dimension of each piece is 4, our algorithm generally works well. When the split dimension is 4 we have 32 separate branches each layer to do the VQ. These branches are independent and can be paralleled. At the end of each layer, separated feature vectors are concatenated together to restore the original hidden dimension, and the restored feature is input to the next layer."

I found it a bit hard to understand this. Are the 4-dimensional vector pieces independently assigned to codebooks? If this is an important component, then it would be useful to report some analysis in the paper. And the same for the other VQ details: whitening and Lipschitz regularization. If the model is highly sensitive to these elements, then that is a limitation.

Overall, the proposed VQ-GNN framework is well-motivated, well-executed, and has promising results. Some additional analysis would be welcome, and potentially a restructuring so that some of the important practical details (product VQ etc.) are made more apparent in the main body.
NIPS
Title VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization

Abstract Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution, which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of the messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context in each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs by combining quantized representations with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In company with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.

1 Introduction

The rise of Graph Neural Networks (GNNs) has brought the modeling of complex graph data into a new era. Using message passing, GNNs iteratively share information between neighbors in a graph to make predictions of node labels, edge labels, or graph-level properties. A number of powerful GNN architectures [1–4] have been widely applied to solve downstream tasks such as recommendation, social analysis, and visual recognition. With the soaring size of realistic graph datasets and the industrial need to model them efficiently, GNNs are hindered by a scalability problem. An L-layer GNN aggregates information from all L-hop neighbors, and standard training routines require these neighbors to all lie on the GPU at once. This prohibits full-batch training when facing a graph with millions of nodes [5]. A number of sampling-based methods have been proposed to accommodate large graphs with limited GPU resources. These techniques can be broadly classified into three categories: (1) Neighbor-sampling methods [2, 6] sample a fixed number of neighbors for each node; (2) Layer-sampling methods [7, 8] sample nodes in each layer independently with a constant sample size; (3) Subgraph-sampling methods [9, 10] sample a subgraph for each mini-batch and perform forward and back-propagation on the same subgraph across all layers.
Although these sampling-based methods may significantly speed up the training of GNNs, they suffer from three major drawbacks: (1) At the inference phase, sampling methods require all the neighbors to draw non-stochastic predictions, resulting in expensive predictions if the full graph cannot fit on the inference device; (2) As reported in [5] and in Section 6, state-of-the-art sampling baselines fail to achieve satisfactory results consistently across various tasks and datasets; (3) Sampling-based methods cannot be universally applied to GNNs that utilize many-hop or global context in each layer, which hinders the application of more powerful GNNs to large graphs.

This paper presents VQ-GNN, a GNN framework using vector quantization to scale most state-of-the-art GNNs to large graphs through a principled and fundamentally different approach compared with the sampling-based methods. We explore the idea of using vector quantization (VQ) as a means of dimensionality reduction to learn and update a small number of quantized reference vectors (codewords) of global node representations. In VQ-GNN, mini-batch message passing in each GNN layer is approximated by a VQ codebook update and an approximated form of message passing between the mini-batch of nodes and the codewords; see Fig. 1. Our approach avoids the "neighbor explosion" problem and enables mini-batch training and inference of GNNs. In contrast to sampling-based techniques, VQ-GNN can effectively preserve all the messages passed to a mini-batch of nodes. We show theoretically and experimentally that our approach is efficient in terms of memory usage, training/inference time, and convergence speed. Experiments on various GNN backbones demonstrate the competitive performance of our framework compared with the full-graph training baseline and sampling-based scalable algorithms.

Paper organization. The remainder of this paper is organized as follows. Section 2 summarizes GNNs that can be re-formulated into a common framework of graph convolution. Section 3 defines the scalability challenge of GNNs and shows that dimensionality reduction is a potential solution. In Section 4, we describe our approach, VQ-GNN, from the theoretical framework to the algorithm design, and explain why it solves the scalability issue of most GNNs. Section 5 compares our approach to the sampling-based methods. Section 6 presents a series of experiments that validate the efficiency, robustness, and universality of VQ-GNN. Finally, Section 7 concludes the paper with a summary of limitations and broader impacts.

2 Preliminaries: GNNs defined as Graph Convolution

Notations. Consider a graph with $n$ nodes and $m$ edges (average degree $d = m/n$). Connectivity is given by the adjacency matrix $A \in \{0,1\}^{n \times n}$, and features are defined on nodes by $X \in \mathbb{R}^{n \times f_0}$, with $f_0$ the length of the feature vectors. Given a matrix $C$, let $C_{i,j}$, $C_{i,:}$, and $C_{:,j}$ denote its $(i,j)$-th entry, $i$-th row, and $j$-th column, respectively. For a finite sequence $\langle i_b \rangle : i_1, \ldots, i_b$, we use $C_{\langle i_b \rangle,:}$ to denote the matrix whose rows are the $i_b$-th rows of matrix $C$. We use $\odot$ to denote the element-wise (Hadamard) product. $\|\cdot\|_p$ denotes the entry-wise $\ell_p$ norm of a vector and $\|\cdot\|_F$ denotes the Frobenius norm. We use $I_n \in \mathbb{R}^{n \times n}$ to denote the identity matrix, $\mathbf{1}_n \in \mathbb{R}^n$ to denote the vector whose entries are all ones, and $e^n_i$ to denote the unit vector in $\mathbb{R}^n$ whose $i$-th entry is 1. The 0-1 indicator function is $\mathbb{1}\{\cdot\}$. We use $\mathrm{diag}(c)$ to denote a diagonal matrix whose diagonal entries come from the vector $c$.
The symbol $\Vert$ represents concatenation along the last axis. We use superscripts to refer to different copies of the same kind of variable; for example, $X^{(l)} \in \mathbb{R}^{n \times f_l}$ denotes the node representations at layer $l$. A Graph Neural Network (GNN) layer takes the node representations of the previous layer, $X^{(l)}$, as input and produces a new representation $X^{(l+1)}$, where $X = X^{(0)}$ are the input features.

A common framework for generalized graph convolution. Many GNNs are designed following different guiding principles, including neighborhood aggregation (GraphSAGE [2], PNA [11]), spatial convolution (GCN [1]), spectral filtering (ChebNet [12], CayleyNet [13], ARMA [14]), self-attention (GAT [3], Graph Transformers [15–17]), diffusion (GDC [18], DCNN [19]), Weisfeiler-Lehman (WL) alignment (GIN [4], 3WL-GNNs [20, 21]), and other graph algorithms [22, 23]. Despite these differences, nearly all GNNs can be interpreted as performing message passing on node features, followed by feature transformation and an activation function. As pointed out by Balcilar et al. [24], GNNs can typically be written in the form
$$X^{(l+1)} = \sigma\!\left(\sum_s C^{(s)} X^{(l)} W^{(l,s)}\right), \tag{1}$$
where $C^{(s)} \in \mathbb{R}^{n \times n}$ denotes the $s$-th convolution matrix that defines the message passing operator, $s \in \mathbb{Z}^+$ denotes the index of convolution, and $\sigma(\cdot)$ denotes the non-linearity. $W^{(l,s)} \in \mathbb{R}^{f_l \times f_{l+1}}$ is the learnable linear weight matrix for the $l$-th layer and $s$-th filter. Within this common framework, GNNs differ from each other by the choice of convolution matrices $C^{(s)}$, which can be either fixed or learnable. A learnable convolution matrix relies on the inputs and learnable parameters and can be different in each layer (thus denoted as $C^{(l,s)}$):
$$C^{(l,s)}_{i,j} = \underbrace{\mathfrak{C}^{(s)}_{i,j}}_{\text{fixed}} \cdot \underbrace{h^{(s)}_{\theta^{(l,s)}}\!\big(X^{(l)}_{i,:}, X^{(l)}_{j,:}\big)}_{\text{learnable}}, \tag{2}$$
where $\mathfrak{C}^{(s)}$ denotes the fixed mask of the $s$-th learnable convolution, which may depend on the adjacency matrix $A$ and the input edge features $E_{i,j}$, while $h^{(s)}(\cdot,\cdot): \mathbb{R}^{f_l} \times \mathbb{R}^{f_l} \to \mathbb{R}$ can be any learnable model parametrized by $\theta^{(l,s)}$. Sometimes a learnable convolution matrix is further row-wise normalized as $C^{(l,s)}_{i,j} \leftarrow C^{(l,s)}_{i,j} / \sum_j C^{(l,s)}_{i,j}$, as in GAT [3], for example. We stick to Eq. (2) in the main paper and discuss row-wise normalization in Appendices A and E. The receptive field of one layer of graph convolution (Eq. (1)) is defined as the set of nodes $\mathcal{R}^1_i$ whose features $\{X^{(l)}_{j,:} \mid j \in \mathcal{R}^1_i\}$ determine $X^{(l+1)}_{i,:}$. We re-formulate some popular GNNs into this generalized graph convolution framework; see Table 1 and Appendix A for more. The back-propagation rule of GNNs defined by Eq. (1) is
$$\nabla_{X^{(l)}}\ell = \sum_s \big(C^{(l,s)}\big)^T \Big(\nabla_{X^{(l+1)}}\ell \odot \sigma'\big(\sigma^{-1}(X^{(l+1)})\big)\Big) \big(W^{(l,s)}\big)^T, \tag{3}$$
which can also be understood as a form of message passing. Here $\sigma'$ and $\sigma^{-1}$ are the derivative and inverse of $\sigma$, respectively, and $\nabla_{X^{(l+1)}}\ell \odot \sigma'(\sigma^{-1}(X^{(l+1)}))$ is the gradient back-propagated through the non-linearity.
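As a concrete reading of Eq. (1), here is a dense NumPy sketch of one generalized graph-convolution layer with a ReLU non-linearity; the variable names and the toy graph are our own illustration, and practical implementations use sparse message passing instead of dense matrix products.

```python
import numpy as np

def graph_conv_layer(C_list, X, W_list):
    """One layer of generalized graph convolution, Eq. (1):
    X^{(l+1)} = sigma(sum_s C^{(s)} X^{(l)} W^{(l,s)}).

    C_list: list of (n, n) convolution matrices, one per filter s
    X:      (n, f_l) node representations of the previous layer
    W_list: list of (f_l, f_{l+1}) weight matrices, one per filter s
    """
    Z = sum(C @ X @ W for C, W in zip(C_list, W_list))  # message passing
    return np.maximum(Z, 0.0)                            # ReLU non-linearity

# Example: a 2-filter layer on a toy 5-node graph with random connectivity.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
C_list = [np.eye(5), A / np.maximum(A.sum(1, keepdims=True), 1)]  # self + mean
X = rng.normal(size=(5, 8))
W_list = [rng.normal(size=(8, 16)) for _ in range(2)]
X_next = graph_conv_layer(C_list, X, W_list)   # shape (5, 16)
```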
3 Scalability Problem and Theoretical Framework

When a graph is large, we are forced to mini-batch it by sampling a subset of $b \ll n$ nodes in each iteration. Say the node indices are $i_1, \ldots, i_b$, and a mini-batch of node features is denoted by $X_B = X_{\langle i_b \rangle,:}$. To mini-batch efficiently for any model, we hope to fetch $\Theta(b)$ information to the training device and spend $\Theta(Lb)$ training time per iteration, while taking $\Theta(n/b)$ iterations to traverse the entire dataset. However, it is intrinsically difficult for most GNNs to meet these three scalability requirements at the same time. The receptive field of $L$ layers of graph convolution (Eq. (1)) is recursively given by $\mathcal{R}^L_i = \bigcup_{j \in \mathcal{R}^1_i} \mathcal{R}^{L-1}_j$ (starting with $\mathcal{R}^1_i \supseteq \{i\} \cup \mathcal{N}_i$), and its size grows exponentially with $L$. Thus, to optimize on a mini-batch of $b$ nodes, we require $\Omega(bd^L)$ inputs and training time per iteration. Sampling a subset of neighbors [2, 6] for each node in each layer does not change the exponential dependence on $L$. Although layer-sampling [7, 25] and subgraph-sampling [9, 10] may require only $\Omega(b)$ inputs and $\Omega(Lb)$ training time per iteration, they can only consider an exponentially small proportion of the messages compared with full-graph training. Most importantly, none of the existing sampling methods supports dense convolution matrices with $O(n^2)$ non-zero entries. Please see Section 5 for a detailed comparison with sampling-based scalable methods after we introduce our framework.

Idea of dimensionality reduction. We aim to develop a scalable algorithm for any GNN model that can be re-formulated as Eq. (1), where the convolution matrix can be either fixed or learnable, and either sparse or dense. The major obstacle to scalability is that, for each layer of graph convolution, computing a mini-batch of forward-passed features $X^{(l+1)}_B = X^{(l+1)}_{\langle i_b \rangle,:}$ requires $O(n)$ entries of $C^{(l,s)}_B = C^{(l,s)}_{\langle i_b \rangle,:}$ and $X^{(l)}$, which will not fit in device memory. Our goal is to apply dimensionality reduction to both the convolution and node feature matrices, and then apply the convolution using compressed "sketches" of $C^{(l,s)}_B$ and $X^{(l)}$. More specifically, we look for a projection matrix $R \in \mathbb{R}^{n \times k}$ with $k \ll n$, such that the product of the low-dimensional sketches $\tilde{C}^{(l,s)}_B = C^{(l,s)}_B R \in \mathbb{R}^{b \times k}$ and $\tilde{X}^{(l)} = R^T X^{(l)} \in \mathbb{R}^{k \times f_l}$ is approximately the same as $C^{(l,s)}_B X^{(l)}$. The approximated product (over all nodes), $\tilde{C}^{(l,s)} \tilde{X}^{(l)} = C^{(l,s)} R R^T X^{(l)}$, can also be regarded as the result of using a low-rank approximation $C^{(l,s)} R R^T \in \mathbb{R}^{n \times n}$ of the convolution matrix, with $\mathrm{rank}(C^{(l,s)} R R^T) \le k$. The distributional Johnson–Lindenstrauss lemma [26] (JL for short) shows the existence of such a projection $R$ with $k = \Theta(\log(n))$, and the following result by Kane and Nelson [27] shows that $R$ can be chosen to be quite sparse:

Theorem 1. For any convolution matrix $C \in \mathbb{R}^{n \times n}$, any column vector $X_{:,a} \in \mathbb{R}^n$ of the node feature matrix $X \in \mathbb{R}^{n \times f}$ (where $a = 1, \ldots, f$), and any $\varepsilon > 0$, there exists a projection matrix $R \in \mathbb{R}^{n \times k}$ (drawn from a distribution) with only an $O(\varepsilon)$-fraction of entries non-zero, such that
$$\Pr\big(\|C R R^T X_{:,a} - C X_{:,a}\|_2 < \varepsilon \|C X_{:,a}\|_2\big) > 1 - \delta, \tag{4}$$
with $k = \Theta(\log(n)/\varepsilon^2)$ and $\delta = O(1/n)$.

Now the sketches $\tilde{C}^{(l,s)}_B$ and $\tilde{X}^{(l)}$ take up $O(b\log(n))$ and $\Theta(f_l \log(n))$ memory, respectively, and can fit on the training and inference device. The sparsity of the projection matrix $R$ is favorable because: (1) if the convolution matrix $C^{(l,s)}$ is sparse (e.g., direct-neighbor message passing, where only an $O(d/n)$-fraction of entries are non-zero), only an $O(\varepsilon d)$-fraction of entries are non-zero in the sketch $\tilde{C}^{(l,s)}$; (2) during training, $\tilde{X}^{(l)}$ is updated in a "streaming" fashion using each mini-batch's inputs $X^{(l)}_B$, and a sparse $R$ reduces the computation time by a factor of $O(\varepsilon)$.
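The following self-contained experiment illustrates the sketching idea behind Theorem 1 with a dense Gaussian projection, a simpler choice than the sparse construction of [27]; the sizes and the projection family here are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, f, k = 2000, 16, 256          # nodes, features, sketch dimension (ours)

C = rng.random((n, n)) * (rng.random((n, n)) < 0.01)  # sparse-ish convolution
X = rng.normal(size=(n, f))

R = rng.normal(size=(n, k)) / np.sqrt(k)  # dense Gaussian JL projection

exact = C @ X                      # what full-graph message passing computes
approx = (C @ R) @ (R.T @ X)       # product of the two low-dim sketches

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error of the sketched product: {rel_err:.3f}")
```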
However, the projection $R$ produced following the sparse JL lemma [27] is randomized and requires $O(\log^2(n))$ uniform random bits to sample. It is difficult to combine this with the deterministic feed-forward and back-propagation rules of neural networks, and there is no clear answer to when and how we should update the projection matrix. Moreover, randomized projections destroy the "identity" of each node, and for learnable convolution matrices (Eq. (2)), it is impossible to compute the convolution matrix using only the sketch of features $\tilde{X}^{(l)}$. For this idea to be useful, we need a deterministic and identity-preserving construction of the projection matrix $R \in \mathbb{R}^{n \times k}$ to avoid these added complexities.

4 Proposed Method: Vector Quantized GNN

Dimensionality reduction using Vector Quantization (VQ). A natural and widely used method to reduce the dimensionality of data in a deterministic and identity-preserving manner is Vector Quantization [28] (VQ), a classical data compression algorithm that can be formulated as the following optimization problem:
$$\min_{R \in \{0,1\}^{n \times k},\ \tilde{X} \in \mathbb{R}^{k \times f}} \|X - R\tilde{X}\|_F \quad \text{s.t.} \quad R_{i,:} \in \{e^k_1, \ldots, e^k_k\}, \tag{5}$$
which is classically solved via k-means [28]. Here the sketch of features $\tilde{X}$ contains the feature "codewords." $R$ is called the codeword assignment matrix, whose rows are unit vectors in $\mathbb{R}^k$, i.e., $R_{i,v} = 1$ if and only if the $i$-th node is assigned to the $v$-th cluster in k-means. The objective in Eq. (5) is the Within-Cluster Sum of Squares (WCSS), and we can define the relative error of VQ as $\epsilon = \|X - R\tilde{X}\|_F / \|X\|_F$. The rows of $\tilde{X}$ are the $k$ codewords (i.e., centroids in k-means) and can be computed as $\tilde{X} = \mathrm{diag}^{-1}(R^T \mathbf{1}_n) R^T X$, which differs slightly from the definition in Section 3 in that a row-wise normalization of $R^T$ is required. The sketch of the convolution matrix can still be computed as $\tilde{C} = CR$. In general, VQ provides a principled framework to learn the low-dimensional sketches $\tilde{X}$ and $\tilde{C}$ in a deterministic and node-identity-preserving manner. However, to enable mini-batch training and inference of GNNs using VQ, three more questions need to be answered:

• How do we approximate the forward-passed mini-batch features of nodes using the learned codewords?
• How do we back-propagate through VQ and estimate the mini-batch gradients of nodes?
• How do we update the codewords and the assignment matrix along with the training of the GNN?

In the remainder of this section, we introduce the VQ-GNN algorithm by answering all three questions and presenting a scalability analysis.

Approximated forward and backward message passing. To approximate the forward pass through a GNN layer (Eq. (1)) with a mini-batch of nodes $\langle i_b \rangle$, we can divide the messages into two categories: intra-mini-batch messages and messages from out-of-mini-batch nodes; see the right panel of Fig. 1. Intra-mini-batch messages $C^{(l,s)}_{\mathrm{in}} X^{(l)}_B$ can always be computed exactly, where $C^{(l,s)}_{\mathrm{in}} = (C^{(l,s)}_B)_{:,\langle i_b \rangle} \in \mathbb{R}^{b \times b}$, because they rely only on the previous layer's node features of the current mini-batch. Equipped with the codewords $\tilde{X}^{(l)}$ and the codeword assignments of all nodes $R^{(l)}$, we can approximate the messages from out-of-mini-batch nodes as $\tilde{C}^{(l,s)}_{\mathrm{out}} \tilde{X}^{(l)}$, where $\tilde{X}^{(l)} = \mathrm{diag}^{-1}(R^T \mathbf{1}_n) R^T X^{(l)}$ as defined above and $\tilde{C}^{(l,s)}_{\mathrm{out}} = C^{(l,s)}_{\mathrm{out}} R$. Here, $C^{(l,s)}_{\mathrm{out}}$ is the remaining part of the convolution matrix after removing the intra-mini-batch messages, i.e., $(C^{(l,s)}_{\mathrm{out}})_{:,j} = (C^{(l,s)}_B)_{:,j} \,\mathbb{1}\{j \notin \langle i_b \rangle\}$ for any $j \in \{1, \ldots, n\}$, and $\tilde{C}^{(l,s)}_{\mathrm{out}}$ is the sketch of $C^{(l,s)}_{\mathrm{out}}$. In general, we can approximate the forward-passed mini-batch features $X^{(l+1)}_B$ by $\hat{X}^{(l+1)}_B = \sigma\big(\sum_s (C^{(l,s)}_{\mathrm{in}} X^{(l)}_B + \tilde{C}^{(l,s)}_{\mathrm{out}} \tilde{X}^{(l)}) W^{(l,s)}\big)$. However, this construction of $\hat{X}^{(l+1)}_B$ does not allow us to back-propagate through VQ straightforwardly using chain rules. During back-propagation, we aim to approximate the previous layer's mini-batch gradients $\nabla_{X^{(l)}_B}\ell$ given the gradients of the (approximated) output, $\nabla_{\hat{X}^{(l+1)}_B}\ell$ (Eq. (3)).
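Returning to Eq. (5): the VQ problem can be solved with plain Lloyd's k-means iterations, as in the minimal sketch below (our own illustrative code, not the paper's exact procedure; it also reports the VQ relative error $\epsilon$ defined above).

```python
import numpy as np

def vq_kmeans(X, k, iters=20, seed=0):
    """Solve the VQ problem of Eq. (5) with Lloyd's k-means iterations.
    Returns the one-hot assignment R (n, k), codewords (k, f), and epsilon."""
    rng = np.random.default_rng(seed)
    codewords = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)  # (n, k)
        assign = d2.argmin(axis=1)
        for v in range(k):                       # recompute cluster means
            members = X[assign == v]
            if len(members) > 0:
                codewords[v] = members.mean(axis=0)
    R = np.eye(k)[assign]
    rel_err = np.linalg.norm(X - R @ codewords) / np.linalg.norm(X)
    return R, codewords, rel_err
```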
1. What is the main contribution of the paper in reducing memory consumption for scalable GNNs?
2. What are the strengths of the proposed approach, particularly in its application of dimensionality reduction techniques?
3. What are the weaknesses of the paper regarding its notations, explanations, and experiment scope?
4. How does the reviewer assess the overall impact of the paper, both theoretically and experimentally?
Summary Of The Paper

The paper presents a vector quantization method to reduce the memory consumption of the convolutional operations for scalable GNNs. The method avoids subsampling nodes in the mini-batches, so that all the messages can be kept in the message passing operations. It quantizes the convolutional matrices by leveraging a dimensionality reduction method. The approximated message passing, with the help of the quantization codewords, is shown to have low relative error theoretically under certain conditions and to have accuracy comparable to the original algorithm empirically.

Review

Positive points: The motivation and challenges are well presented, and the related work seems adequate. I like the idea of applying a dimensionality reduction technique to quantize the feature and convolutional matrices in GNNs. The experimental results seem convincing, though limited experiments are provided.

Comments:
1. The notations and explanations are not very clear, especially in the approximated forward and backward message passing part of Section 4.
2. I would prefer to see more experiments and discussion on the hyperparameter selection and its effects on the algorithm, such as the choice of the reduced dimension and the mini-batch sizes, and the effect of different mini-batch sampling strategies.

During the rebuttal phase, the authors added additional experiments following the suggestions, and I'm satisfied with the feedback and additional clarification given by the authors. Considering the theoretical/experimental impact of the paper, I'd keep my rating towards marginal acceptance.
NIPS
Title VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization Abstract Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques are proposed to alleviate the “neighbor explosion” problem by considering only a small subset of messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the “neighbor explosion” problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In company with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks. 1 Introduction The rise of Graph Neural Networks (GNNs) has brought the modeling of complex graph data into a new era. Using message-passing, GNNs iteratively share information between neighbors in a graph to make predictions of node labels, edge labels, or graph-level properties. A number of powerful GNN architectures [1–4] have been widely applied to solve down-stream tasks such as recommendation, social analysis, visual recognition, etc. With the soaring size of realistic graph datasets and the industrial need to model them efficiently, GNNs are hindered by a scalability problem. An L-layer GNN aggregates information from all L-hop neighbors, and standard training routines require these neighbors to all lie on the GPU at once. This prohibits full-batch training when facing a graph with millions of nodes [5]. ∗Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A number of sampling-based methods have been proposed to accommodate large graphs with limited GPU resources. These techniques can be broadly classified into three categories: (1) Neighborsampling methods [2, 6] sample a fixed-number of neighbors for each node; (2) Layer-sampling methods [7, 8] sample nodes in each layer independently with a constant sample size; (3) Subgraphsampling methods [9, 10] sample a subgraph for each mini-batch and perform forward and backpropagation on the same subgraph across all layers. 
Although these sampling-based methods may significantly speed up the training time of GNNs, they suffer from the following three major drawbacks: (1) At inference phase, sampling methods require all the neighbors to draw non-stochastic predictions, resulting in expensive predictions if the full graph cannot be fit on the inference device; (2) As reported in [5] and in Section 6, state-of-the-art sampling-baselines fail to achieve satisfactory results consistently across various tasks and datasets; (3) Sampling-based methods cannot be universally applied to GNNs that utilize many-hop or global context in each layer, which hinders the application of more powerful GNNs to large graphs. This paper presents VQ-GNN, a GNN framework using vector quantization to scale most state-ofthe-art GNNs to large graphs through a principled and fundamentally different approach compared with the sampling-based methods. We explore the idea of using vector quantization (VQ) as a means of dimensionality reduction to learn and update a small number of quantized reference vectors (codewords) of global node representations. In VQ-GNN, mini-batch message passing in each GNN layer is approximated by a VQ codebook update and an approximated form of message passing between the mini-batch of nodes and codewords; see Fig. 1. Our approach avoids the “neighbor explosion” problem and enables mini-batch training and inference of GNNs. In contrast to samplingbased techniques, VQ-GNN can effectively preserve all the messages passed to a mini-batch of nodes. We theoretically and experimentally show that our approach is efficient in terms of memory usage, training/inference time, and convergence speed. Experiments on various GNN backbones demonstrate the competitive performance of our framework compared with the full-graph training baseline and sampling-based scalable algorithms. Paper organization. The remainder of this paper is organized as follows. Section 2 summarizes GNNs that can be re-formulated into a common framework of graph convolution. Section 3 defines the scalability challenge of GNNs and shows that dimensionality reduction is a potential solution. In Section 4, we describe our approach, VQ-GNN, from theoretical framework to algorithm design and explain why it solves the scalability issue of most GNNs. Section 5 compares our approach to the sampling-based methods. Section 6 presents a series of experiments that validate the efficiency, robustness, and universality of VQ-GNN. Finally, Section 7 concludes this paper with a summary of limitations and broader impacts. 2 Preliminaries: GNNs defined as Graph Convolution Notations. Consider a graph with n nodes and m edges (average degree d = m/n). Connectivity is given by the adjacency matrix A ∈ {0, 1}n×n and features are defined on nodes by X ∈ Rn×f0 with f0 the length of feature vectors. Given a matrix C, let Ci,j , Ci,:, and C:,j denote its (i, j)-th entry, i-th row, j-th column, respectively. For a finite sequence 〈ib〉 : i1, . . . , ib, we use C〈ib〉,: to denote the matrix whose rows are the ib-th rows of matrix C. We use to denote the element-wise (Hadamard) product. ‖ · ‖p denotes the entry-wise `p norm of a vector and ‖ · ‖F denotes the Frobenius norm. We use In ∈ Rn×n to denote the identity matrix, 1n ∈ Rn to denote the vector whose entries are all ones, and ein to denote the unit vector in Rn whose i-th entry is 1. The 0-1 indicator function is 1{·}. We use diag(c) to denote a diagonal matrix whose diagonal entries are from vector c. 
$\|$ denotes concatenation along the last axis. We use superscripts to refer to different copies of the same kind of variable; for example, $X^{(l)} \in \mathbb{R}^{n \times f_l}$ denotes the node representations at layer $l$. A Graph Neural Network (GNN) layer takes the node representations of the previous layer, $X^{(l)}$, as input and produces a new representation $X^{(l+1)}$, where $X = X^{(0)}$ is the input features. A common framework for generalized graph convolution. Many GNNs are designed following different guiding principles, including neighborhood aggregation (GraphSAGE [2], PNA [11]), spatial convolution (GCN [1]), spectral filtering (ChebNet [12], CayleyNet [13], ARMA [14]), self-attention (GAT [3], Graph Transformers [15-17]), diffusion (GDC [18], DCNN [19]), Weisfeiler-Lehman (WL) alignment (GIN [4], 3WL-GNNs [20, 21]), or other graph algorithms [22, 23]. Despite these differences, nearly all GNNs can be interpreted as performing message passing on node features, followed by feature transformation and an activation function. As pointed out by Balcilar et al. [24], GNNs can typically be written in the form
$$X^{(l+1)} = \sigma\Big( \sum_s C^{(s)} X^{(l)} W^{(l,s)} \Big), \qquad (1)$$
where $C^{(s)} \in \mathbb{R}^{n \times n}$ denotes the $s$-th convolution matrix that defines the message passing operator, $s \in \mathbb{Z}^+$ denotes the index of convolution, and $\sigma(\cdot)$ denotes the non-linearity. $W^{(l,s)} \in \mathbb{R}^{f_l \times f_{l+1}}$ is the learnable linear weight matrix for the $l$-th layer and $s$-th filter. Within this common framework, GNNs differ from each other by the choice of convolution matrices $C^{(s)}$, which can be either fixed or learnable. A learnable convolution matrix relies on the inputs and learnable parameters and can be different in each layer (thus denoted as $C^{(l,s)}$):
$$C^{(l,s)}_{i,j} = \underbrace{C^{(s)}_{i,j}}_{\text{fixed}} \cdot \underbrace{h^{(s)}_{\theta^{(l,s)}}\big(X^{(l)}_{i,:}, X^{(l)}_{j,:}\big)}_{\text{learnable}} \qquad (2)$$
where $C^{(s)}$ denotes the fixed mask of the $s$-th learnable convolution, which may depend on the adjacency matrix $A$ and input edge features $E_{i,j}$, while $h^{(s)}(\cdot,\cdot) : \mathbb{R}^{f_l} \times \mathbb{R}^{f_l} \to \mathbb{R}$ can be any learnable model parametrized by $\theta^{(l,s)}$. Sometimes a learnable convolution matrix is further row-wise normalized as $C^{(l,s)}_{i,j} \leftarrow C^{(l,s)}_{i,j} / \sum_j C^{(l,s)}_{i,j}$, for example in GAT [3]. We stick to Eq. (2) in the main paper and discuss row-wise normalization in Appendices A and E. The receptive field of a layer of graph convolution (Eq. (1)) is defined as the set of nodes $\mathcal{R}^1_i$ whose features $\{X^{(l)}_{j,:} \mid j \in \mathcal{R}^1_i\}$ determine $X^{(l+1)}_{i,:}$. We re-formulate some popular GNNs into this generalized graph convolution framework; see Table 1 and Appendix A for more. The back-propagation rule of GNNs defined by Eq. (1) is
$$\nabla_{X^{(l)}} \ell = \sum_s \big(C^{(l,s)}\big)^T \Big( \nabla_{X^{(l+1)}} \ell \odot \sigma'\big(\sigma^{-1}(X^{(l+1)})\big) \Big) \big(W^{(l,s)}\big)^T, \qquad (3)$$
which can also be understood as a form of message passing. Here $\sigma'$ and $\sigma^{-1}$ are the derivative and inverse of $\sigma$, respectively, and $\nabla_{X^{(l+1)}} \ell \odot \sigma'\big(\sigma^{-1}(X^{(l+1)})\big)$ is the gradient back-propagated through the non-linearity. 3 Scalability Problem and Theoretical Framework When a graph is large, we are forced to mini-batch it by sampling a subset of $b \ll n$ nodes in each iteration. Say the node indices are $i_1, \ldots, i_b$, and a mini-batch of node features is denoted by $X_B = X_{\langle i_b \rangle,:}$. To mini-batch efficiently for any model, we hope to fetch $\Theta(b)$ information to the training device, spend $\Theta(Lb)$ training time per iteration, and take $\Theta(n/b)$ iterations to traverse the entire dataset. However, it is intrinsically difficult for most GNNs to meet these three scalability requirements at the same time.
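To make Eq. (1) concrete, here is a minimal NumPy sketch of one generalized graph-convolution layer (an illustration we add for this write-up, not the authors' implementation; the GCN-style normalization, random toy data, and all names are our own choices):

```python
import numpy as np

def gnn_layer(conv_mats, X, weights, sigma=lambda z: np.maximum(z, 0.0)):
    """One generalized graph-convolution layer, Eq. (1):
    X_next = sigma(sum_s C^(s) @ X @ W^(l,s))."""
    return sigma(sum(C @ X @ W for C, W in zip(conv_mats, weights)))

# Toy example with a single fixed convolution matrix: the GCN-style
# symmetric normalization D^-1/2 (A + I) D^-1/2.
rng = np.random.default_rng(0)
n, f_in, f_out = 6, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                     # undirected adjacency
A_hat = A + np.eye(n)                      # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
C = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

X = rng.normal(size=(n, f_in))
W = rng.normal(size=(f_in, f_out))
print(gnn_layer([C], X, [W]).shape)        # (6, 3)
```

Under this framework, different GNN backbones would differ only in how the list of convolution matrices is constructed (fixed masks versus learnable entries as in Eq. (2)).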
The receptive field of $L$ layers of graph convolution is recursively given by $\mathcal{R}^L_i = \bigcup_{j \in \mathcal{R}^1_i} \mathcal{R}^{L-1}_j$ (starting with $\mathcal{R}^1_i \supseteq \{i\} \cup \mathcal{N}_i$), and its size grows exponentially with $L$. Thus, to optimize on a mini-batch of $b$ nodes, we require $\Omega(b d^L)$ inputs and training time per iteration. Sampling a subset of neighbors [2, 6] for each node in each layer does not change the exponential dependence on $L$. Although layer- [7, 25] and subgraph-sampling [9, 10] may require only $\Omega(b)$ inputs and $\Omega(Lb)$ training time per iteration, they are only able to consider an exponentially small proportion of messages compared with full-graph training. Most importantly, all existing sampling methods do not support dense convolution matrices with $O(n^2)$ non-zero entries. Please see Section 5 for a detailed comparison with sampling-based scalable methods after we introduce our framework. Idea of dimensionality reduction. We aim to develop a scalable algorithm for any GNN model that can be re-formulated as Eq. (1), where the convolution matrix can be either fixed or learnable, and either sparse or dense. The major obstacle to scalability is that, for each layer of graph convolution, to compute a mini-batch of forward-passed features $X^{(l+1)}_B = X^{(l+1)}_{\langle i_b \rangle,:}$, we need $O(n)$ entries of $C^{(l,s)}_B = C^{(l,s)}_{\langle i_b \rangle,:}$ and $X^{(l)}$, which will not fit in device memory. Our goal is to apply a dimensionality reduction to both the convolution and node feature matrices, and then apply convolution using compressed "sketches" of $C^{(l,s)}_B$ and $X^{(l)}$. More specifically, we look for a projection matrix $R \in \mathbb{R}^{n \times k}$ with $k \ll n$, such that the product of the low-dimensional sketches $\tilde{C}^{(l,s)}_B = C^{(l,s)}_B R \in \mathbb{R}^{b \times k}$ and $\tilde{X}^{(l)} = R^T X^{(l)} \in \mathbb{R}^{k \times f_l}$ is approximately the same as $C^{(l,s)}_B X^{(l)}$. The approximated product (over all nodes), $\tilde{C}^{(l,s)} \tilde{X}^{(l)} = C^{(l,s)} R R^T X^{(l)}$, can also be regarded as the result of using a low-rank approximation $C^{(l,s)} R R^T \in \mathbb{R}^{n \times n}$ of the convolution matrix, such that $\mathrm{rank}(C^{(l,s)} R R^T) \leq k$. The distributional Johnson-Lindenstrauss lemma [26] (JL for short) shows the existence of such a projection $R$ with $k = \Theta(\log(n))$, and the following result by Kane and Nelson [27] shows that $R$ can be chosen to be quite sparse: Theorem 1. For any convolution matrix $C \in \mathbb{R}^{n \times n}$, any column vector $X_{:,a} \in \mathbb{R}^n$ of the node feature matrix $X \in \mathbb{R}^{n \times f}$ (where $a = 1, \ldots, f$) and $\varepsilon > 0$, there exists a projection matrix $R \in \mathbb{R}^{n \times k}$ (drawn from a distribution) with only an $O(\varepsilon)$-fraction of entries non-zero, such that
$$\Pr\big( \|C R R^T X_{:,a} - C X_{:,a}\|_2 < \varepsilon \|C X_{:,a}\|_2 \big) > 1 - \delta, \qquad (4)$$
with $k = \Theta(\log(n)/\varepsilon^2)$ and $\delta = O(1/n)$. Now, the sketches $\tilde{C}^{(l,s)}_B$ and $\tilde{X}^{(l)}$ take up $O(b \log(n))$ and $\Theta(f_l \log(n))$ memory, respectively, and can fit into the training and inference device. The sparsity of the projection matrix $R$ is favorable because: (1) if the convolution matrix $C^{(l,s)}$ is sparse (e.g., direct-neighbor message passing, where only an $O(d/n)$-fraction of entries are non-zero), only an $O(\varepsilon d)$-fraction of entries are non-zero in the sketch $\tilde{C}^{(l,s)}$; (2) during training, $\tilde{X}^{(l)}$ is updated in a "streaming" fashion using each mini-batch's inputs $X^{(l)}_B$, and a sparse $R$ reduces the computation time by a factor of $O(\varepsilon)$. However, the projection $R$ produced following the sparse JL-lemma [27] is randomized and requires $O(\log^2(n))$ uniform random bits to sample. It is difficult to combine this with the deterministic feed-forward and back-propagation rules of neural networks, and there is no clue when and how we should update the projection matrix. Moreover, randomized projections destroy the "identity" of each node, and for learnable convolution matrices (Eq. (2)) it is impossible to compute the convolution matrix using only the sketch of features $\tilde{X}^{(l)}$.
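As a quick numerical illustration of the sketching idea above (our own toy script, which uses a dense Gaussian projection instead of the sparse JL construction of Theorem 1), the approximation error of $C R R^T X$ shrinks as the sketch dimension $k$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, f = 2000, 8
C = rng.normal(size=(n, n)) / np.sqrt(n)   # stand-in "convolution" matrix
X = rng.normal(size=(n, f))
exact = C @ X

for k in (64, 256, 1024):
    R = rng.normal(size=(n, k)) / np.sqrt(k)   # so that E[R R^T] = I_n
    C_sketch = C @ R                           # (n, k) sketch of C
    X_sketch = R.T @ X                         # (k, f) sketch of X
    approx = C_sketch @ X_sketch               # = C R R^T X, rank <= k
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(k, round(err, 3))                    # error decays as k grows
```

For generic random data the error decays only slowly with $k$; a deterministic, identity-preserving choice of $R$ that exploits cluster structure in the features can do much better with small $k$, which is where VQ enters in the next section.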
For this idea to be useful, we need a deterministic and identity-preserving construction of the projection matrix $R \in \mathbb{R}^{n \times k}$ that avoids these added complexities. 4 Proposed Method: Vector Quantized GNN Dimensionality reduction using Vector Quantization (VQ). A natural and widely-used method to reduce the dimensionality of data in a deterministic and identity-preserving manner is Vector Quantization [28] (VQ), a classical data compression algorithm that can be formulated as the following optimization problem:
$$\min_{R \in \{0,1\}^{n \times k},\; \tilde{X} \in \mathbb{R}^{k \times f}} \|X - R\tilde{X}\|_F \quad \text{s.t.} \quad R_{i,:} \in \{e^1_k, \ldots, e^k_k\}, \qquad (5)$$
which is classically solved via k-means [28]. Here the sketch of features $\tilde{X}$ is called the feature "codewords." $R$ is called the codeword assignment matrix, whose rows are unit vectors in $\mathbb{R}^k$, i.e., $R_{i,v} = 1$ if and only if the $i$-th node is assigned to the $v$-th cluster in k-means. The objective in Eq. (5) is called the Within-Cluster Sum of Squares (WCSS), and we define the relative error of VQ as $\varepsilon = \|X - R\tilde{X}\|_F / \|X\|_F$. The rows of $\tilde{X}$ are the $k$ codewords (i.e., the centroids in k-means) and can be computed as $\tilde{X} = \mathrm{diag}^{-1}(R^T 1_n) R^T X$, which differs slightly from the definition in Section 3, as a row-wise normalization of $R^T$ is required. The sketch of the convolution matrix $\tilde{C}$ can still be computed as $\tilde{C} = CR$. In general, VQ provides a principled framework to learn the low-dimensional sketches $\tilde{X}$ and $\tilde{C}$ in a deterministic and node-identity-preserving manner (see the sketch after this paragraph block for a concrete illustration). However, to enable mini-batch training and inference of GNNs using VQ, three more questions need to be answered: • How to approximate the forward-passed mini-batch features of nodes using the learned codewords? • How to back-propagate through VQ and estimate the mini-batch gradients of nodes? • How to update the codewords and the assignment matrix along with the training of the GNN? In the following part of this section, we introduce the VQ-GNN algorithm by answering all three questions and presenting a scalability analysis. Approximated forward and backward message passing. To approximate the forward pass through a GNN layer (Eq. (1)) with a mini-batch of nodes $\langle i_b \rangle$, we can divide the messages into two categories: intra-mini-batch messages, and messages from out-of-mini-batch nodes; see the right panel of Fig. 1. Intra-mini-batch messages $C^{(l,s)}_{\text{in}} X^{(l)}_B$ can always be computed exactly, where $C^{(l,s)}_{\text{in}} = (C^{(l,s)}_B)_{:,\langle i_b \rangle} \in \mathbb{R}^{b \times b}$, because they only rely on the previous layer's node features of the current mini-batch. Equipped with the codewords $\tilde{X}^{(l)}$ and the codeword assignments of all nodes $R^{(l)}$, we can approximate the messages from out-of-mini-batch nodes as $\tilde{C}^{(l,s)}_{\text{out}} \tilde{X}^{(l)}$, where $\tilde{X}^{(l)} = \mathrm{diag}^{-1}(R^T 1_n) R^T X^{(l)}$ as defined above and $\tilde{C}^{(l,s)}_{\text{out}} = C^{(l,s)}_{\text{out}} R$. Here, $C^{(l,s)}_{\text{out}}$ is the remaining part of the convolution matrix after removing the intra-mini-batch messages, i.e., $(C^{(l,s)}_{\text{out}})_{:,j} = (C^{(l,s)}_B)_{:,j} \, 1\{j \notin \langle i_b \rangle\}$ for any $j \in \{1, \ldots, n\}$, and $\tilde{C}^{(l,s)}_{\text{out}}$ is the sketch of $C^{(l,s)}_{\text{out}}$. In general, we can approximate the forward-passed mini-batch features $X^{(l+1)}_B$ by $\hat{X}^{(l+1)}_B = \sigma\big( \sum_s (C^{(l,s)}_{\text{in}} X^{(l)}_B + \tilde{C}^{(l,s)}_{\text{out}} \tilde{X}^{(l)}) W^{(l,s)} \big)$. However, the above construction of $\hat{X}^{(l+1)}_B$ does not allow us to back-propagate through VQ straightforwardly using the chain rule.
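Before turning to back-propagation, here is a minimal self-contained sketch of the VQ step of Eq. (5), solved with plain Lloyd's k-means (illustrative code of ours, not the authors' release; the clustered toy data and all names are our assumptions):

```python
import numpy as np

def vector_quantize(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means solving Eq. (5). Returns the one-hot codeword
    assignment matrix R in {0,1}^(n x k) and the codewords X_tilde (k x f)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        for v in range(k):
            members = X[assign == v]
            if len(members) > 0:       # keep a stale center if a cluster empties
                centers[v] = members.mean(axis=0)
    R = np.eye(k)[assign]
    return R, centers                  # centers ~ diag^-1(R^T 1_n) R^T X

# Clustered toy features: the VQ relative error is small when k matches
# the number of underlying clusters, even though k << n.
rng = np.random.default_rng(2)
n, f, k = 500, 16, 8
X = rng.normal(size=(k, f))[rng.integers(0, k, size=n)] \
    + 0.05 * rng.normal(size=(n, f))
R, X_tilde = vector_quantize(X, k)
print(np.linalg.norm(X - R @ X_tilde) / np.linalg.norm(X))   # relative error
```

On clustered features the relative error is small even for $k \ll n$, which is exactly the regime in which the sketches stay accurate.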
During back-propagation, we aim at approximating the previous layer's mini-batch gradients $\nabla_{X^{(l)}_B} \ell$ given the gradients of the (approximated) output, $\nabla_{\hat{X}^{(l+1)}_B} \ell$ (Eq. (3)). Firstly, we do not know how to compute the partial derivatives of $\tilde{C}^{(l,s)}_{\text{out}}$ and $\tilde{X}^{(l)}$ with respect to $X^{(l)}_B$, because the learning and updating of the VQ codewords and assignments are data-dependent and are usually realized by an iterative optimization algorithm. Thus, we would need to go through an iterative computation graph to evaluate the partial derivative of $R^{(l)}$ with respect to $X^{(l)}_B$, which requires access to many historical features and gradients and thus violates the scalability constraints. Secondly, even if we apply a very rough approximation during back-propagation as in [29], that is, assuming that the partial derivative of $R^{(l)}$ with respect to $X^{(l)}_B$ can be ignored (i.e., the codeword assignment matrix is detached from the computation graph, known as "straight-through" back-propagation), we are still not able to evaluate the derivatives of the codewords $\tilde{X}^{(l)}$, because they rely on some node features out of the current mini-batch that are not on the training device. Generally speaking, designing a back-propagation rule for VQ under the mini-batch training setup is a challenging new problem. It is helpful to re-examine what happens when we back-propagate on the full graph. In Section 2, we saw that back-propagation of a layer of convolution-based GNN can also be realized by message passing (Eq. (3)). In Fig. 2, we show that the messages related to a mini-batch of nodes can be classified into three types. The "green" and "red" messages are the intra-mini-batch messages and the messages from out-of-mini-batch nodes, respectively. Apart from them, although the "blue" messages to out-of-mini-batch nodes do not contribute to the forward-passed mini-batch features, they are used during back-propagation and are an important part of the back-propagated mini-batch gradients. Since both the forward pass and back-propagation can be realized by message passing, can we approximate the back-propagated mini-batch gradients $\nabla_{X^{(l)}_B} \ell$ in a symmetric manner? We can introduce a set of gradient codewords $\tilde{G}^{(l+1)} = \mathrm{diag}^{-1}(R^T 1_n) R^T G^{(l+1)}$ using the same assignment matrix, where $G^{(l+1)} = \nabla_{\hat{X}^{(l+1)}} \ell \odot \sigma'(\sigma^{-1}(X^{(l+1)}))$ is the gradient back-propagated through the non-linearity. Each gradient codeword corresponds one-to-one with a feature codeword, since we want to use only one assignment matrix $R$; each pair of codewords is concatenated together during VQ updates. Following this idea, we define the approximated forward and backward message passing as follows:
$$\begin{bmatrix} \hat{X}^{(l+1)}_B \\ \bullet \end{bmatrix} = \sigma\Bigg( \sum_s \underbrace{\begin{bmatrix} C^{(l,s)}_{\text{in}} & \tilde{C}^{(l,s)}_{\text{out}} \\ \big(\tilde{C}^{(l,s)}_{\text{out}}\big)^T & 0 \end{bmatrix}}_{\text{approx. message passing weight matrix } \overline{C}^{(l,s)}} \underbrace{\begin{bmatrix} X^{(l)}_B \\ \tilde{X}^{(l)} \end{bmatrix}}_{\text{mini-batch features and feat. codewords}} W^{(l,s)} \Bigg), \qquad (6)$$
$$\begin{bmatrix} \hat{\nabla}_{X^{(l)}_B} \ell \\ \bullet \end{bmatrix} = \sum_s \big(\overline{C}^{(l,s)}\big)^T \underbrace{\begin{bmatrix} G^{(l+1)}_B \\ \tilde{G}^{(l+1)} \end{bmatrix}}_{\text{mini-batch gradients and grad. codewords}} \big(W^{(l,s)}\big)^T, \qquad (7)$$
where $\overline{C}^{(l,s)} \in \mathbb{R}^{(b+k) \times (b+k)}$ is the approximated message passing weight matrix, shared between the forward pass and the back-propagation process. The lower halves of the left-hand-side vectors of Eqs. (6) and (7), marked $\bullet$, are used in neither the forward nor the backward calculations and are never computed during training or inference. The approximated forward and backward message passing enables end-to-end mini-batch training and inference of GNNs and is the core of our VQ-GNN framework.
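The forward half of Eq. (6) can be sketched as follows for a single convolution matrix (a self-contained illustrative NumPy example of ours, with synthetic clustered features standing in for real node representations); it checks the mini-batch approximation against the exact full-graph forward pass:

```python
import numpy as np

def approx_forward(C, X, batch_idx, R, X_tilde, W,
                   sigma=lambda z: np.maximum(z, 0.0)):
    """Approximate the mini-batch output of Eq. (6) for one convolution
    matrix C: intra-mini-batch messages are exact, messages from
    out-of-mini-batch nodes are routed through the k codewords."""
    C_B = C[batch_idx, :]                  # (b, n) rows of C for the batch
    C_in = C_B[:, batch_idx]               # (b, b) intra-batch block
    C_out = C_B.copy()
    C_out[:, batch_idx] = 0.0              # drop intra-batch columns
    C_out_sketch = C_out @ R               # (b, k) sketch of C_out
    return sigma((C_in @ X[batch_idx] + C_out_sketch @ X_tilde) @ W)

# Synthetic clustered node features with a known codeword assignment.
rng = np.random.default_rng(3)
n, f, k, b = 400, 16, 8, 32
assign = rng.integers(0, k, size=n)
X = rng.normal(size=(k, f))[assign] + 0.05 * rng.normal(size=(n, f))
R = np.eye(k)[assign]                                  # one-hot assignments
counts = np.maximum(R.sum(axis=0), 1.0)                # avoid empty clusters
X_tilde = (R.T @ X) / counts[:, None]                  # diag^-1(R^T 1_n) R^T X
C = (rng.random((n, n)) < 0.05) * rng.random((n, n))   # sparse random conv.
W = rng.normal(size=(f, 4))
batch_idx = rng.choice(n, size=b, replace=False)

exact = np.maximum((C @ X) @ W, 0.0)[batch_idx]        # full-graph forward
approx = approx_forward(C, X, batch_idx, R, X_tilde, W)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small
```

The decomposition is exact for the intra-batch block; all remaining error comes from replacing out-of-batch features with their codewords, which is precisely what the bounds below quantify.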
Error bounds on the estimated features and gradients. We can effectively upper-bound the estimation errors of the mini-batch features and gradients using the relative error of VQ under some mild conditions. For ease of presentation, we assume the GNN has only one convolution matrix in the following theorems. Theorem 2. If the VQ relative error of the $l$-th layer is $\varepsilon^{(l)}$, the convolution matrix $C^{(l)}$ is either fixed or learnable with the Lipschitz constant of $h_{\theta^{(l)}}(\cdot) : \mathbb{R}^{2f_l} \to \mathbb{R}$ upper-bounded by $\mathrm{Lip}(h_{\theta^{(l)}})$, and the Lipschitz constant of the non-linearity is $\mathrm{Lip}(\sigma)$, then the estimation error of the forward-passed mini-batch features satisfies
$$\|\hat{X}^{(l+1)}_B - X^{(l+1)}_B\|_F \leq \varepsilon^{(l)} \cdot \big(1 + O(\mathrm{Lip}(h_{\theta^{(l)}}))\big) \, \mathrm{Lip}(\sigma) \, \|C^{(l)}\|_F \|X^{(l)}\|_F \|W^{(l)}\|_F. \qquad (8)$$
Corollary 3. If the conditions in Theorem 2 hold and the non-linearity satisfies $|\sigma'(z)| \leq \sigma'_{\max}$ for any $z \in \mathbb{R}$, then the estimation error of the back-propagated mini-batch gradients satisfies
$$\|\hat{\nabla}_{X^{(l)}_B} \ell - \nabla_{X^{(l)}_B} \ell\|_F \leq \varepsilon^{(l)} \cdot \big(1 + O(\mathrm{Lip}(h_{\theta^{(l)}}))\big) \, \sigma'_{\max} \, \|C^{(l)}\|_F \|\nabla_{X^{(l+1)}} \ell\|_F \|W^{(l)}\|_F. \qquad (9)$$
Note that the error bounds rely on the Lipschitz constant of $h(\cdot)$ when the convolution matrix is learnable. In practice, we can Lipschitz-regularize GNNs like GAT [3] without affecting their performance; see Appendix E. VQ-GNN: the complete algorithm and analysis of scalability. The only remaining question is how to update the learned codewords and assignments during training. In this paper, we use the VQ update rule proposed in [29], which updates the codewords as exponential moving averages of the mini-batch inputs (a minimal sketch follows at the end of this section); see Appendix E for the detailed algorithm. We find that such an exponential moving average technique suits the mini-batch training of GNNs well and resembles the online k-means algorithm. See Fig. 3 for a schematic diagram of VQ-GNN; the complete pseudo-code is in Appendix E. With VQ-GNN, we can mini-batch train and perform inference on large graphs using GNNs, just like a regular neural network (e.g., an MLP). We have to maintain a small codebook of $k$ codewords and update it each iteration, which takes an extra $O(Lkf)$ memory and $O(Lnkf)$ training time per epoch, where $L$ and $f$ are the numbers of layers and (hidden) features of the GNN, respectively. We can effectively preserve all messages related to a mini-batch while randomly sampling nodes from the graph. The number of intra-mini-batch messages is $O(b^2 d / n)$ when the nodes are sampled randomly. Thus we only need to pass $O(b^2 d / n + bk)$ messages per iteration and $O(bd + nk)$ per epoch. In practice, when combined with techniques including product VQ and implicit whitening (see Appendix E), we can further improve the stability and performance of VQ-GNN. These theoretical and experimental analyses justify the efficiency of the proposed VQ-GNN framework.
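To make the exponential-moving-average codeword update concrete, here is an illustrative reimplementation in the style of [29] (our own naming; the decay constant is an arbitrary assumption, and the product VQ and implicit whitening of Appendix E are omitted):

```python
import numpy as np

class EMACodebook:
    """Codebook updated as exponential moving averages of mini-batch inputs,
    resembling an online (mini-batch) k-means on streaming node features."""
    def __init__(self, k, f, decay=0.99, eps=1e-5, seed=0):
        rng = np.random.default_rng(seed)
        self.decay, self.eps = decay, eps
        self.counts = np.zeros(k)              # EMA of cluster sizes
        self.sums = rng.normal(size=(k, f))    # EMA of per-cluster sums
        self.codewords = self.sums.copy()

    def update(self, X_batch):
        # Nearest-codeword assignment for the current mini-batch.
        d2 = ((X_batch[:, None, :] - self.codewords[None, :, :]) ** 2).sum(-1)
        R = np.eye(len(self.codewords))[d2.argmin(axis=1)]
        # Decay the running statistics, then fold in this mini-batch.
        self.counts = self.decay * self.counts + (1 - self.decay) * R.sum(axis=0)
        self.sums = self.decay * self.sums + (1 - self.decay) * (R.T @ X_batch)
        self.codewords = self.sums / (self.counts[:, None] + self.eps)
        return R    # one-hot assignments, one row per node in the batch

# Usage: one codebook per layer, updated once per training iteration.
cb = EMACodebook(k=8, f=16)
R_batch = cb.update(np.random.default_rng(1).normal(size=(64, 16)))
print(R_batch.shape)   # (64, 8)
```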
5 Related Work In this section, we review some of the recent scalable GNN methods and analyze their theoretical memory and time complexities, focusing on scalable algorithms that, like our VQ-GNN framework, can be universally applied to a variety of GNN models, including NS-SAGE [2], Cluster-GCN [9], and GraphSAINT [10]. (We call the neighbor-sampling method of [2] NS-SAGE and the GNN model in the same paper SAGE-Mean, to avoid ambiguity.) We consider GCN here as the simplest benchmark. For a GCN with $L$ layers and $f$-dimensional (hidden) features in each layer, applied to a sparse graph with $n$ nodes and $m$ edges (i.e., average degree $d = m/n$) for "full-graph" training and inference, the memory usage is $O(Lnf + Lf^2)$ and the training/inference time is $O(Lmf + Lnf^2)$. We further assume the graph is large and that the training and inference device memory is $O(b)$, where $b$ is the mini-batch size (i.e., the memory bottleneck limits the mini-batch size); generally $d \ll b \ll n \ll m$ holds. We divide the sampling baselines into three categories; the complexities of selected methods are given in Table 2. See Appendix D for further discussion of related work. Neighbor-sampling. The neighbor-sampling scheme chooses a subset of neighbors in each layer to reduce the amount of message passing required. NS-SAGE [2] samples $r$ neighbors for each node and only aggregates the messages from the sampled nodes. For a GNN with $L$ layers, $O(br^L)$ nodes are sampled in a mini-batch, which leads to complexities growing exponentially with the number of layers $L$; see Table 2. Therefore, NS-SAGE is not scalable on large graphs for a model with an arbitrary number of layers. NS-SAGE requires all the neighbors to draw non-stochastic predictions in the inference phase, resulting in $O(d^L)$ inference time, since we cannot fit $O(n)$ nodes at once on the device. VR-GCN [6] proposes a variance-reduction technique to further reduce the sample size $r$ of sampled neighbors. However, VR-GCN requires an $O(Lnf)$ side memory of all the nodes' hidden features and suffers from this added memory complexity. Layer-sampling. These methods perform node sampling independently in each layer, which results in a constant sample size across all layers and limits the exponential expansion of the neighbor set. FastGCN [7] applies importance sampling to reduce variance. Adapt [25] improves FastGCN with an additional sampling network but also incurs the significant overhead of the sampling algorithm. Subgraph-sampling. These schemes sample a subgraph for each mini-batch and perform forward and backward passes on the same subgraph across all layers. Cluster-GCN [9] partitions a large graph into several densely connected subgraphs and samples a subset of subgraphs (with edges between clusters added back) to train in each mini-batch. Cluster-GCN requires $O(m)$ pre-computation time and $O(bd)$ time to recover the intra-cluster edges when loading each mini-batch. GraphSAINT [10] samples a set of nodes and takes the induced subgraph for mini-batch training. We consider the best-performing variant, GraphSAINT-RW, which uses $L$ steps of random walk to induce a subgraph from $b$ randomly sampled nodes. $O(Lb)$ nodes and edges are covered in each of the $n/b$ mini-batches. Although $O(Ln)$ nodes are sampled (with some repetition) in an epoch, the number of edges covered (i.e., the messages considered in each layer of a GNN) is also $O(Ln)$, which is usually much smaller than $m$. GraphSAINT-Node, which randomly samples nodes for each mini-batch, does not suffer from this factor of $L$ in the complexities; however, its performance is worse than GraphSAINT-RW's. Like NS-SAGE and some other sampling methods, Cluster-GCN and GraphSAINT-RW cannot draw predictions on a randomly sampled subgraph in the inference phase; thus they suffer from the same $O(d^L)$ inference time complexity as NS-SAGE; see Table 2. 6 Experiments In this section, we verify the efficiency, robustness, and universality of VQ-GNN through a series of experiments. See Appendix F for implementation details and Appendix G for ablation studies and additional experiments. Scalability and efficiency: memory usage, convergence, training and inference time. We summarize the memory usage of the scalable methods and our VQ-GNN framework in Table 3. In the implementations of the PyG library [30], the memory consumption of GNN models usually grows linearly with respect to both the number of nodes and the number of edges in a mini-batch.
On the ogbn-arxiv benchmark, we fix the number of gradient-descended nodes and the number of messages passed in a mini-batch to 85K and 1.5M, respectively, for a fair comparison between the sampling methods and our approach. VQ-GNN may require a small amount of extra memory when provided with the same number of nodes per batch, which is the cost of retaining all the edges from the original graph. However, our VQ-GNN framework can effectively preserve all the edges connected to a mini-batch of nodes (i.e., it never drops edges); see Fig. 1. Thus, when we fix the number of messages passed per batch, our method shows significant memory efficiency compared with the sampling baselines. Fig. 4 shows the convergence comparison of the various scalability methods, where we see that VQ-GNN is superior in terms of convergence speed with respect to training time. When training GCN and SAGE-Mean on the ogbn-arxiv benchmark for a fixed amount of time (e.g., 100 s), the validation performance of VQ-GNN is always the highest. The training time in Fig. 4 excludes the time for data loading, pre-processing, and validation-set evaluation. Our VQ-GNN approach also leads to compelling inference speed-ups. Beyond the training-efficiency issues of GNNs, conducting inference on large-scale graphs poses some unique challenges. Following our discussion in Section 5 and the standard implementations provided by the Open Graph Benchmark (OGB) [5], the three sampling-based baselines (which share the same inference procedure) require all of the L-hop neighbors of the mini-batch nodes to lie on the device at once during the inference phase. The inference time of SAGE-Mean trained with sampling methods on the ogbn-arxiv benchmark is 1.61 s, while our method accelerates inference by an order of magnitude and reduces the inference time to 0.40 s. Performance comparison across various datasets, settings, and tasks. We validate the efficacy of our method on various benchmarks in Table 4. The four representative benchmarks are selected because they cover very different types of datasets, settings, and tasks. The ogbn-arxiv benchmark is a common citation network of arXiv papers, while Reddit is a very dense social network of Reddit posts, which has many more features per node and a larger average node degree; see Table 6 in Appendix F for detailed statistics of the datasets. PPI is a node classification benchmark under the inductive learning setting, i.e., neither the attributes nor the connections of test nodes are present during training, while the other benchmarks are all transductive. VQ-GNN can be applied in the inductive setting with only one extra step: during the inference stage, we need to find the codeword assignments (i.e., the nearest codewords) of the test nodes before making predictions, since we have no access to the test nodes during training. Neither the learned codewords nor the GNN parameters are updated during inference. ogbl-collab is a link prediction benchmark whose labels and loss are intrinsically different. It is very challenging for a scalable method to perform well on all of these benchmarks. In Table 4, we confirm that VQ-GNN is more robust than the three sampling-based methods. Across the four benchmarks, VQ-GNN always achieves performance similar to or better than the oracle "full-graph" training performance, while the other scalable algorithms suffer from performance drops in some cases.
For example, NS-SAGE fails when training GAT on ogbl-collab, Cluster-GCN consistently falls behind on PPI, and GraphSAINT-RW's performance drops on ogbl-collab when using the SAGE-Mean and GAT backbones. We consider this robust performance to be VQ-GNN's unique value among the many other scalable solutions. The VQ-GNN framework is robust because it provides bounded approximations of "full-graph" training (Theorem 2 and Corollary 3), while most other scalable algorithms do not enjoy such a theoretical guarantee. VQ-GNN is also universal with respect to the backbone model, including but not limited to the GCN, SAGE-Mean, and GAT backbones shown here; see Appendix G for more experiments on GNNs that utilize multi-hop neighborhoods and global context, e.g., graph transformers. 7 Conclusion Summary of our framework: strengths, weaknesses, future directions, and broader impacts. This paper introduced the VQ-GNN framework, which can scale most state-of-the-art GNNs to large graphs through a principled and fundamentally different approach compared with sampling-based methods. We have shown both theoretically and experimentally that our approach is efficient in memory usage, training and inference time, and convergence speed. VQ-GNN can be universally applied to most GNN models and different graph learning tasks, and can equally scale up GNNs that utilize many-hops-away or global context in each layer. However, the performance of VQ-GNN relies on the quality of the approximation provided by VQ. In practice, for VQ to work adequately in GNNs, a set of techniques is necessary. Because of limited time, we did not exhaustively explore all possible techniques or optimize the VQ design. Given that our preliminary design of VQ in GNNs already achieves competitive performance compared with the state-of-the-art sampling baselines, we hypothesize that further optimization of the VQ design could improve performance. We hope our work opens up promising new avenues of research for scaling up GNNs, and it also has the potential to be applied to other data domains wherever the size of a single sample is large, e.g., long time-series or videos. Considering broader impacts, we view our work mainly as a methodological and theoretical contribution, which paves the way for more resource-efficient graph representation learning. We envision that our methodological innovations can enable more scalable ways to do large-network analysis for social good. However, progress in graph embedding learning might also enable hostile social network analyses, e.g., extracting fine-grained user interactions for social tracking. Acknowledgments and Disclosure of Funding Goldstein, Kong, and Chen were supported by the Office of Naval Research, the AFOSR MURI program, the DARPA Young Faculty Award, and the National Science Foundation Division of Mathematical Sciences. Additional support was provided by Capital One Bank and JP Morgan Chase. Huang and Ding were supported by a startup fund from the Department of Computer Science of the University of Maryland, National Science Foundation IIS-1850220 CRII Award 030742-00001, DOD-DARPA-Defense Advanced Research Projects Agency Guaranteeing AI Robustness against Deception (GARD), Air Force Material Command, and Adobe, Capital One, and JP Morgan faculty fellowships.
Li and Dickerson were supported in part by NSF CAREER Award IIS-1846237, NSF D-ISN Award #2039862, NSF Award CCF-1852352, NIH R01 Award NLM-013039-01, NIST MSE Award #20126334, DARPA GARD #HR00112020007, DoD WHS Award #HQ003420F0035, ARPA-E Award #4334192 and a Google Faculty Research Award.
1. How does the reviewer assess the novelty and value of the paper's proposal to reduce the memory burden of GNNs using vector quantization? 2. What are the strengths and weaknesses of the paper regarding its technical discussion, formula details, proof, and clarity? 3. How does the reviewer evaluate the significance and impact of the proposed method compared to existing subsampling methods in terms of experimental results and generalization performance? 4. What is the main concern of the reviewer regarding the paper's discussion on the choice of VQ dimensionality reduction? 5. How does the reviewer suggest improving the paper's presentation of the main message of Table 3? 6. After the author feedback, how has the reviewer's evaluation of the paper changed, and what is their final evaluation?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a technique to improve the scalability of GNNs for large graphs. In many existing works, neighbor nodes, GNN layers, or subgraphs are subsampled to reduce the burden on GPU memory. In this paper, instead of subsampling, vector quantization (VQ) on the embedding vector of each node is introduced to reduce memory and ensure scalability. Technically, this paper develops a new approximation for message passing on codebook-represented graphs. In the theoretical evaluation, the paper shows that the prediction errors due to VQ can be upper-bounded by using Lipschitz regularization. In the experimental evaluation, the generalization performance of the proposed VQ method is comparable to existing scalable GNNs. Review Originality: VQ + DNNs has already been studied in the famous VQ-VAE [29] paper and its followers. However, the combination of GNNs and VQ is new to me. Reducing the dimensionality of the embedding vectors is certainly a natural approach to memory compression. If there has been no previous attempt pursuing the scalability of GNNs in this direction, then I think this paper proposes a novel and valuable idea to the GNN community. The discussion in Sec. 3 shows that if a good dimensionality reduction matrix R can be found, we can approximate the node feature vectors well. In this paper, VQ is chosen as one of the dimensionality reduction methods because it is a ''natural and widely-used method'', but are there any other ''natural and widely-used'' dimensionality reduction method choices? For example, if we don't need to use a fixed codebook, aren't PCA and FLDA also candidates? I expect more discussion on this issue to justify the choice of VQ. The appendix acknowledges that the quantized representation of Degree-Quant [35] improves the memory efficiency of GNNs, but [35] does not consider scalability and therefore does not conflict with the contribution of this paper. I'm not fully sure regarding this point. The main objective of both this paper and [35] is to reduce the memory burden of the computational device while avoiding subsampling. This paper and [35] try to achieve the same objective with different approaches: vector quantization in this paper and numerical quantization in [35]. Then these are not orthogonal, I think. It is very possible that I misunderstand the essences, and I would appreciate clarification on this point in the rebuttal discussion. I think that the new algorithm for learning correctly on VQ codebook-represented graph mini-batches is a great contribution, since it is technically non-trivial and the outcome seems reasonable. Quality: Except for the details of some of the formulas, I find no major problems or concerns with the technical discussion in the main manuscript. I did not follow the details of the proofs. Clarity: On the left-hand side of equations (6, 7), the lower halves of the vectors are omitted. What variables are being addressed here? Is it a variable that is not used in either the forward or backward calculations? Table 3 is difficult for me to understand. In particular, the comparison of ``memory usage when the number of messages passed is fixed'': What is the main message here? I read the paper several times, but in the end, I cannot figure it out. It's a concept I haven't seen in the context of many GNN papers, so unless you can explain it more clearly, I can't read any advantage of the proposed method from this table. L302-303 ''We summarize the training and inference time........ in Table 3.''
should be Table 4. Significance: The main weakness of this paper is the experimental results. The main purpose of this paper is, in my understanding, to propose a GNN technique that scales to large graphs. However, Table 2 says that the memory usage of the proposed method is comparable to that of existing subsampling methods. This implies that there is no large graph dataset for which the proposed method is the first and only solution for scalable GNNs. Another issue is the generalization performance. Unlike existing subsampling methods, the proposed method is unique in that it can retain all nodes and layers (L52-53). This paper examines the advantage of this characteristic in terms of numerical performance in Table 5. Among the four scalable GNNs, the proposed method obtains the highest performance in only one out of six cases. The best performer is NS-SAGE, which takes first place in three cases, and GraphSAINT is the runner-up, best in two cases. This means that the proposed method does not show outstanding results or unique value among the several solutions that provide scalability for GNNs. So, why should we prepare a newly proposed method instead of using existing methods that have already been evaluated and implemented (e.g., GraphSAGE)? It is difficult to answer this question clearly, and therefore it is difficult for me to recommend acceptance in the current manuscript status. In terms of computation time and time complexity, the proposed method is clearly superior to the others (Tables 2, 4). Therefore, I suggest that it would be better to focus on the point of fast computation rather than on general scalability. (+) VQ for dimensionality reduction for GNNs is (probably) new (+) Approximate message passing for a VQ codebook developed (+) Theoretical guarantee for bounded prediction error (-) More discussion of the choice of VQ dimensionality reduction is expected. (-) [35] is on a totally different line of research; I'm not fully sure about that. (--) Experimental results for scalability and generalization are weak. (-) What is the main message of Table 3? After feedback: I found the author feedback effectively resolves many of my misunderstandings and concerns. I raise my evaluation to 6, weak accept. Good luck!
NIPS
Title Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics Abstract Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. We propose a general framework for analyzing the fairness of decision systems based on theories of distributive justice, encompassing different established "patterns of justice" that correspond to different normative positions. We show that the most popular group fairness metrics can be interpreted as special cases of our approach. Thus, we provide a unifying and interpretative framework for group fairness metrics that reveals the normative choices associated with each of them and that allows understanding their moral substance. At the same time, we provide an extension of the space of possible fairness metrics beyond the ones currently discussed in the fair ML literature. Our framework also allows overcoming several limitations of group fairness metrics that have been criticized in the literature, most notably (1) that they are parity-based, i.e., that they demand some form of equality between groups, which may sometimes be harmful to marginalized groups, (2) that they only compare decisions across groups, but not the resulting consequences for these groups, and (3) that the full breadth of the distributive justice literature is not sufficiently represented. 1 Introduction Supervised machine learning (ML) is increasingly being used for prediction-based decision making in various consequential applications, such as credit lending, school admission, and recruitment. Recent work has shown that the use of algorithms for decision making can reinforce existing biases or introduce new ones [8]. Consequently, fairness has emerged as an important desideratum for automated decision making.
As recent cases in practice have shown, this is crucial in order to mitigate unjustified disadvantages towards certain demographic groups (see, e.g., [2, 46, 21, 40]). However, quantifying the fairness of decision-making systems is not straightforward, as any morally appropriate notion of fairness heavily depends on the given context. Many different measures have emerged in the algorithmic fairness literature to assess and mitigate unfairness towards marginalized groups in decision-making systems. Many of the proposed notions of fairness are in the category of so-called group fairness criteria [7], some of which are mathematically incompatible in practice (Kleinberg et al. [32], Chouldechova [14]). Therefore, satisfying one such fairness criterion comes at the expense of not being able to satisfy others (Kleinberg et al. [31], Wong [55]). Most existing group fairness criteria demand equality of a certain value between different socio-demographic groups [12], though our framework is also compatible with other notions of fairness that concern groups of individuals, such as preference-based fairness [56, 30]. This focus on groups is in contrast to the comparison of individuals, as is done with other types of fairness such as individual fairness [18, 52], envy-freeness [6], or counterfactual fairness (Kusner et al. [34]). Readers unfamiliar with group fairness may refer to [38, Chapter 2], [53], and [7] for an overview of the topic. We briefly introduce and formally define the most-discussed group fairness criteria in Appendix A. Much of the algorithmic fairness literature evolves around a limited set of group fairness metrics and is often not clearly linked to the many philosophical theories of justice that have been well discussed. Kuppler et al. [33] find that there is little to no overlap between philosophical theories of justice and metrics in the algorithmic fairness literature and conclude that "apparently, the fair machine learning literature has not taken full advantage of the rich and longstanding literature on distributive justice" [33, p. 17]. Therefore, the definitions of group fairness could be described as quite narrow when viewed from a philosophical perspective. This becomes evident when thinking about an example: group fairness metrics typically demand that groups are equal with respect to some metric. Demanding equality between groups often makes sense, but consider a case in which we could increase the utility of one group without harming another: should we do this? While we cannot say that this is always a good idea, it at least seems to be a reasonable objection to group fairness metrics, which demand equality at all costs. Therefore, this paper asks whether group fairness metrics can be extended to compare groups in other ways. As of today, only a limited number of fairness metrics have been discussed, forcing stakeholders to choose between a set of pre-defined metrics that they then have to justify for their context. This paper, in contrast, presents a general framework for the derivation of targeted and context-specific fairness metrics, starting from values and moral views, and connects these to the philosophical literature, in particular to theories of distributive justice. Our main contributions can be summarized as follows:
1. We propose a general framework for assessing the fairness of prediction-based decision systems, based on theories of distributive justice and allowing for different established "patterns of justice" that correspond to different normative positions. The framework is based on an analysis of how utility is distributed between groups; "pattern of justice" refers to normative ideas of what constitutes a just distribution. 2. We show that the most popular group fairness metrics can be interpreted as special cases of this approach, which thus establishes a unifying framework that includes established metrics but also shows how new ones can be constructed. We first present existing literature on group fairness (including its limitations) in Section 2. In Section 3, we present our unified framework for utility-based definitions of group fairness. We focus on the mathematical formalization of different aspects of the distributive justice literature while keeping the review of the philosophical side short. More details about the philosophical side can be found in the companion paper [3]. Section 4 then demonstrates that existing group fairness metrics are special cases of our utility-based approach. Finally, we discuss the implications of this and possible future work in Section 5. 2 Limitations of current group fairness criteria Existing group fairness criteria pursue an egalitarian approach. This means that they demand equality of a certain value between different socio-demographic groups [12]. The fulfillment of these criteria is easy to assess, as this only requires access to a few variables (e.g., to check whether statistical parity is satisfied, we only need the decisions and the group membership of individuals). However, they also come with several limitations: The "leveling down objection" As has been shown by [27], in some cases, enforcing group fairness criteria can yield worse results for all groups in order to ensure parity between the groups. This is what is known as the "leveling down objection", which is often brought forward to challenge egalitarianism in the philosophical literature [41, 17]: in a case in which equality requires us to worsen the outcomes for everyone, should we really demand equality, or should we rather tolerate some inequalities? As criticized by Cooper and Abrams [15] and Weerts et al. [54], existing definitions of group fairness lack this differentiation, as they always minimize inequality. No consideration of consequences As pointed out by Hertweck et al. [24] and Weerts et al. [54], a large part of the existing work on fairness criteria seems to focus on an equal distribution of favorable decisions and not on the consequences of these decisions. Binns [11] notes that these criteria "[assume] a uniform valuation of decision outcomes across different populations" [11, p. 6], and that this assumption does not always hold. Whether a loan approval has a positive effect on one's life or not arguably depends on one's ability to repay this loan (and possibly on other individual attributes). This narrow focus on the algorithm's decisions instead of its consequences makes it difficult to use existing group fairness criteria for a moral assessment of unfairness in decision-making systems.
Parity-based criteria that only consider the decisions but not their consequences do not allow us to deliberately give positive decisions to a larger share of the disadvantaged group, as this would be a form of unequal treatment. However, Kasy and Abebe [29] argue that in such a case, unequal treatment can be required by justice to reduce overall inequalities. Several works have therefore taken a utility-based view of fairness. Heidari et al. [22]'s utility-based definitions of fairness focus on the effects of decisions, while [13] developed a method that follows the Rawlsian leximin principle to increase the welfare of the worse-off groups. However, none of them provides a general framework that encompasses different theories of distributive justice. Limited set of fairness definitions Another limitation of existing group fairness criteria is that they represent a limited set of alternatives. One has to choose one over the others, as they are mathematically incompatible [32, 14]. [47, 28] have highlighted that the criteria differ with respect to underlying moral values. Thus, solely choosing one among the limited set of criteria might fail to adequately represent a morally appropriate definition of fairness for a given context. Heidari et al. [23] show how existing group fairness criteria can be viewed as instantiations of the equality of opportunity (EOP) principle. Similarly, [10] show that they can be viewed as special cases of a more general principle of fairness they call fair equality of chances (FEC). This way, they provide a framework through which the existing fairness criteria can be viewed. However, the conditions under which the existing fairness criteria map to EOP (or to FEC, respectively) are not always given. We cannot expect every application to fall neatly into one of these conditions and thus cannot expect to find a fitting fairness criterion among the ones already proposed in the group fairness literature. These more general notions of fairness might be suitable to grasp the different existing notions of group fairness. However, they do not adequately represent the complexity of the distributive justice literature (Kuppler et al. [33]). In this paper, we want to bridge the gap between fair machine learning and philosophical theories of distributive justice. 3 A framework for fairness evaluations based on distributive justice As discussed in Section 2, current group fairness criteria have some serious shortcomings. Clearly, they do not reflect the full breadth of the literature on distributive justice [33]. To address this issue (at least partially), we propose a utility-based extension of group fairness. This section introduces this approach from a rather technical perspective. More details on its links to the literature on distributive justice can be found in [3]. Our approach is based on the observation that each decision system creates a distribution of utility among individuals and groups. Theories of distributive justice are concerned with the question of when such a distribution can be considered just. As we will later show, some of these theories can be mapped to classical group fairness concepts from the fair ML literature (see Section 4). We consider a decision-making system that takes binary decisions $D$ on decision subjects ($DS$) of a given population $P$, based on a decision rule $r$.
The decision rule assigns each individual $i \in P$ a binary decision $d_i \in \{0, 1\}$ by applying the decision rule to some input data, which includes an unknown but decision-relevant binary random variable $Y$. It does not matter how the decision rule functions: it could, for example, be an automated rule that takes decisions based on predictions of $Y$ from an ML model, or the decisions could be made by humans. We further assume that at least two social groups are defined, denoted by different values of the sensitive attribute $A$. 3.1 Utility of the decision subjects As previously discussed, current definitions of group fairness only consider the decisions themselves, but not their consequences, even though the same decision could be beneficial for some and harmful for others [54]. Our approach explicitly considers the consequences of decisions, i.e., the resulting utility (or welfare), which could be positive in the case of a benefit or negative in the case of a harm. We model the consequences with a utility function $u$ which, in our binary context, may depend on both the decision $d_i$ and the value $y_i$ of $Y$. The utility $u_{DS,i}$ of a decision subject $i$ is given by:
$$u_{DS,i} = w_{11} \cdot d_i \cdot y_i + w_{10} \cdot d_i \cdot (1 - y_i) + w_{01} \cdot (1 - d_i) \cdot y_i + w_{00} \cdot (1 - d_i) \cdot (1 - y_i), \qquad (1)$$
where the utility weights $w_{dy}$ denote the four different utility values that might be realized for the four combinations of the random variables $Y$ and $D$.¹ The utility $u_{DS,i}$ is a realization of a random variable $U_{DS}$. For assessing the fairness of a decision rule, we are interested in systematic differences between groups. Our framework is based on the assumption that such differences correspond to different expectation values $E(U_{DS})$ of the individual utility; this means that we are interested in $E(U_{DS})$ for the different groups in $A$. Note that this is a normative choice and that other ways of comparing groups are imaginable, e.g., comparing their aggregated utilities.
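As a concrete illustration of Eq. (1) and of the group expectations used below, here is a short self-contained Python sketch (the loan-style utility weights, the synthetic data, and all names are our own assumptions, and the claims differentiator introduced in the next subsection is left empty, i.e., J = ∅):

```python
import numpy as np

def subject_utility(d, y, w11, w10, w01, w00):
    """Decision-subject utility of Eq. (1) for binary decisions d and labels y."""
    return (w11 * d * y + w10 * d * (1 - y)
            + w01 * (1 - d) * y + w00 * (1 - d) * (1 - y))

def group_expected_utility(d, y, a, group, weights):
    """Empirical estimate of E(U_DS | A = group), here with J = ∅."""
    mask = (a == group)
    return subject_utility(d[mask], y[mask], *weights).mean()

# Hypothetical loan-style weights: approved & repays +1, approved &
# defaults -1, denied 0 either way. The data is synthetic.
rng = np.random.default_rng(0)
n = 10_000
a = rng.integers(0, 2, size=n)                   # sensitive attribute A
y = rng.binomial(1, np.where(a == 0, 0.7, 0.5))  # decision-relevant Y
d = rng.binomial(1, 0.4 + 0.3 * y)               # some decision rule r
weights = (1.0, -1.0, 0.0, 0.0)                  # (w11, w10, w01, w00)
for g in (0, 1):
    print(g, round(group_expected_utility(d, y, a, g, weights), 3))
```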
3.2 Relevant groups to compare Theories of distributive justice are typically concerned with individuals [48], while group fairness is concerned with socially salient groups. Group fairness focuses on comparisons of different groups, as this is what theories of discrimination are concerned with [1]. This poses the question of how the comparison of individuals in distributive justice and the comparison of socially salient groups in group fairness can be combined. John Rawls's concept of "relevant positions" [42, §16, pp. 81-86] unites both ideas. We view "relevant positions" as the groups whose expected utility we want to compare and refer to them as the relevant groups (to compare).² As defined in [3], relevant groups to compare have comparable moral claims³ to receive the same utility, but probably do not receive the same utility. Our approach thus views the theories of distributive justice, which we introduced in Section 2, from the perspective of relevant groups to compare. To be more specific, relevant groups are defined by two concepts: (1) the claims differentiator $J$: what makes it the case that some people have the same claims to utility while others have different claims to utility? (2) the causes of inequality (resulting in socially salient groups $A$): what are the most likely causes of inequalities? As described in [3], the claims differentiator identifies people who have equal moral claims. In other words, the utility should be distributed equally between these people. This means we only consider people with equal claims for our fairness evaluation.⁴ Within the group of all individuals that have equal claims to utility (i.e., that are equal in their value for $J$), we specify groups that are unlikely to end up receiving equal utility, on average, based on the known causes of inequality (i.e., that are different in their value for $A$, which is sometimes also referred to as the protected attribute). $J$ and $A$ define the relevant groups that group fairness criteria compare. For simplicity, we will assume that there are only two groups $A = \{0, 1\}$ that are unlikely to receive the same utility. It is, for example, common to expect individuals of a different race or gender not to derive the same utility from decision systems. ¹In practice, however, one could use a much more specific utility function, using other attributes as well. A rather simple extension would also take $A$ into account and define the four utility weights for each group separately. That should be supported by an analysis of the inequality generated in the transition between the decision space and the utility space for different (socially salient or other) groups. In philosophy and economics, the work of Amartya Sen explains why resources do not always convert into the same capabilities (options to be and do) [49, pp. 21-23]. ²This builds on Anonymous [4], which refers to relevant positions as "representative individuals". ³For a philosophical analysis of comparable moral claims to a good, see [25]. ⁴This concept is similar to the justifier described in [36, 10]. In the next step, we want to compare the utilities of the relevant groups. Specifically, we will compare the expectation value of the utility over all decisions made for a given population under a given decision rule. We denote this as the expected utility that takes the relevant groups into account, $E(U_{DS} \mid J = j, A = a)$, where $J$ denotes the claims differentiator, $j$ corresponds to a possible value of the variable $J$, and $a \in A$ denotes the different socially salient groups to be compared with each other. In our framework, assessing fairness means comparing relevant groups with the same $j$ but different $a$ with respect to the distribution of utility. 3.3 Patterns for a just distribution of utility The claims differentiator $J$ tells us which individuals have equal moral claims to the utility distributed by the decision process. However, in some cases, an equal distribution of utility among the relevant groups (defined by $J$ and $A$) may not be the primary concern of justice (see below). Our approach offers different choices, which we refer to as patterns of justice. For each of them, we briefly explain their normative view of what constitutes justice, and we formulate a fairness criterion and a fairness metric: a fairness criterion is a mathematical formalization of a pattern of justice, which can either be satisfied or not; a fairness metric $F$, on the other hand, measures the degree to which this criterion is fulfilled. Note that we construct fairness metrics for a binary $A = \{0, 1\}$. Therefore, all patterns of justice that we present compare the expected utility of two relevant groups: $A = 0 \wedge J = j$ (i.e., $E(U_{DS} \mid J = j, A = 0)$) and $A = 1 \wedge J = j$ (i.e., $E(U_{DS} \mid J = j, A = 1)$).
However, the patterns of justice that we introduce here (egalitarianism, maximin, prioritarianism, sufficientarianism) can easily be translated to cases with more groups. In the following, we introduce only a few patterns of justice (representing fairness principles for the allocation of goods) that are widely discussed in the philosophical literature. However, our utility-based definition of group fairness should in no way be seen as limited to these patterns: our approach can easily be extended to other patterns of justice, and one may also implement one's own pattern. Our goal here is simply to highlight a few popular patterns of justice and show how they can be embedded in our approach. 3.3.1 Egalitarianism Egalitarianism – as the name suggests – demands equality [5]. Egalitarianism as a broad concept does not, however, specify what should be equalized. This is the subject of the equality of what debate initiated by Sen [48]. One could, for example, aim to equalize opportunities (equality of opportunity) or outcomes (equality of outcomes). Fairness criterion The egalitarian fairness criterion is satisfied if the expected utility is equal for the relevant groups:
$$E(U_{DS} \mid J = j, A = 0) = E(U_{DS} \mid J = j, A = 1) \qquad (2)$$
Fairness metric The degree to which egalitarianism is fulfilled is measured as the absolute difference between the two groups' expected utilities (lower values are better):⁵
$$F_{\text{egalitarianism}} = \big| E(U_{DS} \mid J = j, A = 0) - E(U_{DS} \mid J = j, A = 1) \big| \qquad (3)$$
⁵Here, we consider the absolute difference in expected utilities. Alternatively, we could also consider the ratio of the two expected utilities. 3.3.2 Maximin Maximin describes the principle that, among a set of possible distributions, the one that maximizes the expected utility of the relevant group that is worst off should be chosen [35]. In contrast to egalitarianism, inequalities are thus tolerated if the worst-off group benefits from them. This has been defended by Rawls in the form of the "difference principle" [42, 43]. Fairness criterion The maximin fairness criterion is satisfied if there is no other possible distribution that would lead to a greater expected utility of the worst-off relevant group, which we denote by $U^{\text{worst-off}}_{DS} = \min_{a \in A} \big( E(U_{DS} \mid J = j, A = a) \big)$. It thus requires that the decision rule $r'$ (which represents the decisions taken for each individual) results in a $U^{\text{worst-off}}_{DS}(r')$ that is greater than or equal to $U^{\text{worst-off}}_{DS}(r)$ for any other decision rule $r$ from the set of all possible decision rules $R$:
$$U^{\text{worst-off}}_{DS}(r') \geq \max_{r \in R} \big( U^{\text{worst-off}}_{DS}(r) \big) \qquad (4)$$
Fairness metric The degree to which maximin is fulfilled is measured as the value of the lowest expected utility among all relevant groups (higher values are better):
$$F_{\text{maximin}} = \min_{a \in A} \big( E(U_{DS} \mid J = j, A = a) \big) \qquad (5)$$
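Both patterns introduced so far can be computed directly from the two groups' (estimated) expected utilities. The following sketch (ours; the two hypothetical decision rules are summarized only by their group utilities) also illustrates the leveling-down objection from Section 2: egalitarianism (Eq. (3)) prefers the perfectly equal rule, while maximin (Eq. (5)) prefers the unequal rule under which both groups are better off:

```python
def egalitarian_metric(group_utils):
    """F_egalitarianism, Eq. (3): absolute gap between the two relevant
    groups' expected utilities (lower is better)."""
    return abs(group_utils[0] - group_utils[1])

def maximin_metric(group_utils):
    """F_maximin, Eq. (5): expected utility of the worst-off relevant
    group (higher is better)."""
    return min(group_utils)

# Two hypothetical decision rules, each summarized by the groups'
# expected utilities (E(U_DS | J=j, A=0), E(U_DS | J=j, A=1)).
rule_equal = [0.40, 0.40]    # perfectly equal
rule_better = [0.55, 0.50]   # unequal, but both groups better off
print(egalitarian_metric(rule_equal),
      round(egalitarian_metric(rule_better), 2))   # 0.0 vs 0.05
print(maximin_metric(rule_equal),
      maximin_metric(rule_better))                 # 0.40 vs 0.50
```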
3.3.3 Prioritarianism

Prioritarianism describes the principle that, among a set of possible distributions, the one that maximizes a weighted sum of utilities across all people should be chosen [26]. In contrast to egalitarianism, inequalities are thus tolerated if they increase this weighted sum of expected utilities. In this weighted sum, the expected utility of the worst-off relevant groups is given a higher weight (the maximin principle can be seen as the extreme version of this, as an infinite weight is given to the worst-off relevant groups).

Fairness criterion
The prioritarian fairness criterion is satisfied if there is no other possible distribution that would lead to a greater overall expected utility, which is measured as a weighted aggregation of the relevant groups' expected utilities, where the expected utility of the worst-off relevant group is given a higher weight. It thus requires that the decision rule r′ results in a weighted utility $\tilde{U}_{DS}(r') = k \cdot U_{DS}^{\text{worst-off}}(r') + U_{DS}^{\text{better-off}}(r')$ that is greater than or equal to $\tilde{U}_{DS}(r)$ for any other decision rule r from the set of all possible decision rules R:
$$\tilde{U}_{DS}(r') \geq \max_{r \in R} \tilde{U}_{DS}(r), \quad (6)$$
where $\tilde{U}_{DS}$ denotes the sum of decision subject utilities over all groups, with a weight k > 1 applied to the worst-off group.

Fairness metric
The degree to which prioritarianism is fulfilled is measured as an aggregate of the (weighted) expected utilities (higher values are better):
$$F_{\text{prioritarianism}} = k \cdot \min\big(E(U_{DS} \mid J = j, A = 0), E(U_{DS} \mid J = j, A = 1)\big) + \max\big(E(U_{DS} \mid J = j, A = 0), E(U_{DS} \mid J = j, A = 1)\big) \quad (7)$$

3.3.4 Sufficientarianism

Sufficientarianism [50] describes the principle that there is a minimum threshold of utility that should be reached by everyone in expectation. Inequalities between relevant groups above this minimum threshold are acceptable according to this principle. Inequalities are thus tolerated as long as all groups achieve a minimum level of utility in expectation.

Fairness criterion
The sufficientarian fairness criterion is satisfied if all groups' expected utilities under the decision rule r′ are above a given threshold t:
$$\forall a \in A: \; E(U_{DS} \mid J = j, A = a)(r') \geq t \quad (8)$$

Fairness metric
The degree to which sufficientarianism is fulfilled is measured as the number of groups whose expected utility is above the given threshold t (higher values are better):
$$F_{\text{sufficientarianism}} = \sum_{a \in A} T_a, \quad \text{where } T_a = \begin{cases} 1 & \text{if } E(U_{DS} \mid J = j, A = a) \geq t \\ 0 & \text{otherwise} \end{cases}$$
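The remaining two patterns are equally direct to express in code. In this sketch, the weight k and the threshold t are free normative parameters that stakeholders must choose; the default value of k is illustrative only and not prescribed by the framework:

```python
def f_prioritarianism(eu_a0, eu_a1, k=2.0):
    # Equation (7): the worst-off group's expected utility is weighted by k > 1.
    return k * min(eu_a0, eu_a1) + max(eu_a0, eu_a1)

def f_sufficientarianism(expected_utilities, t):
    # Number of relevant groups whose expected utility reaches the threshold t.
    return sum(1 for eu in expected_utilities if eu >= t)
```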
3.4 Extension of group fairness

Based on the mathematical framework outlined in this section, we suggest an extension of the current understanding of group fairness as described in Section 2. Instead of seeing group fairness as demanding equality between socio-demographic groups with respect to some value, we instead propose the following definition:

Definition 1 (Group fairness). Group fairness is the just distribution of utility among relevant groups.

What makes a distribution just depends on the pattern of justice. Thus, our extended understanding of group fairness does not necessarily require equal expected utilities across groups. Furthermore, our definition ensures that only relevant groups are being compared (in the most familiar case, these correspond to socio-demographic groups).

Group fairness criteria, in our sense, specify when group fairness is satisfied by a decision-making system. From this, it follows that there are more group fairness criteria than previously acknowledged. This extension of group fairness criteria alleviates some of the criticisms of currently popular group fairness criteria, as we will show in Section 5.

4 Relation to existing group fairness criteria

Existing group fairness criteria are special cases of the utility-based extension we propose. In this section, we formally show under which conditions our approach maps to existing group fairness criteria (see Table 1 for a summary of the results). In particular, we look at well-known group fairness criteria: (conditional) statistical parity, equality of opportunity, false positive rate (FPR) parity, equalized odds, predictive parity, false omission rate (FOR) parity, and sufficiency. The mathematical definitions of these criteria can be found in Table 2 in Appendix A. Furthermore, we show how the utility-based group fairness metrics relate to existing ones. In this section, we only demonstrate when our utility-based approach results in one of three often-discussed group fairness criteria: statistical parity, equality of opportunity, and predictive parity. We refer the interested reader to Appendix B.2, where we provide a similar mapping for other existing group fairness criteria.

The findings we present in this section extend those of [23], [36], and [10]. While [23] consider the distribution of undeserved utility (what they call the difference between an individual's actual and effort-based utility), [36] and [10] use the decision subject utility $U_{DS}$ to derive a morally appropriate group fairness definition. This is similar to our approach presented in this paper; however, they only consider the two options $U_{DS} = D$ and $U_{DS} = Y$, while our approach allows for arbitrary functions f for the utility: $U_{DS} = f(D, Y)$.

Statistical parity (also called demographic parity or group fairness [18]) is defined as $P(D = 1 \mid A = 0) = P(D = 1 \mid A = 1)$. For specific decision subject utility weights $w_{dy}$ and without any claims differentiator J, the condition derived from our utility-based framework is equivalent to statistical parity:

Proposition 2 (Statistical parity as utility-based fairness). If the utility weights of all possible outcomes (as described in Section 3.1) do not depend on the group membership ($w_{dy} \perp a$), and $w_{11} = w_{10} \neq w_{01} = w_{00}$, then the egalitarian pattern fairness condition with J = ∅ is equivalent to statistical parity.

The formal proof of Proposition 2 can be found in Appendix B.1.1.

We use $w_{1y}$ (footnote 6) to denote the decision subject utility associated with a positive decision (D = 1) and $w_{0y}$ to denote the decision subject utility associated with a negative decision (D = 0). As we showed above, requiring statistical parity can be equivalent to requiring the fulfillment of a utility-based group fairness criterion.

Footnote 6: Recall that utility weights are denoted by $w_{dy}$, where both d and y can take the value 0 or 1. For simplicity, we use $w_{1y}$ as a placeholder for the utility weights of all outcomes with a positive decision (d = 1) and for individuals of any type (y ∈ {0, 1}), i.e., $w_{10}$ or $w_{11}$.
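A small numeric illustration of Proposition 2 (the numbers are made up): with $w_{11} = w_{10} = 1$ and $w_{01} = w_{00} = 0$, a group's expected utility reduces to its acceptance rate $P(D = 1 \mid A = a)$, so equal expected utilities amounts to equal acceptance rates.

```python
p_accept = {0: 0.7, 1: 0.5}              # illustrative acceptance rates per group
w1y, w0y = 1.0, 0.0                      # w11 = w10 = w1y, w01 = w00 = w0y

# Under these weights, E(U_DS | A = a) = w0y + (w1y - w0y) * P(D = 1 | A = a).
eu = {a: w0y + (w1y - w0y) * p for a, p in p_accept.items()}

f_egal = abs(eu[0] - eu[1])              # egalitarian metric, Equation (3)
sp_gap = abs(p_accept[0] - p_accept[1])  # statistical parity gap
assert abs(f_egal - abs(w1y - w0y) * sp_gap) < 1e-12
```

The relation checked by the assertion is made precise in Corollary 3 below.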
However, even if the two criteria are equivalent, this is not necessarily true if we compare the group fairness metrics that specify the degree to which these two criteria are fulfilled, i.e., if we compare the degree to which statistical parity is fulfilled with the degree to which a utility-based fairness metric is fulfilled:

Corollary 3 (Partial fulfillment of statistical parity in terms of utility-based fairness). Suppose that the degree to which statistical parity is fulfilled is defined as the absolute difference in decision ratios across groups, i.e., $|P(D = 1 \mid A = 0) - P(D = 1 \mid A = 1)|$. If the utility weights of all possible outcomes do not depend on the group membership ($w_{dy} \perp a$), and $w_{11} = w_{10} \neq w_{01} = w_{00}$ (i.e., $w_{1y} \neq w_{0y}$), and J = ∅, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which statistical parity is fulfilled, multiplied by $|w_{1y} - w_{0y}|$.

The formal proof of Corollary 3 can be found in Appendix B.1.2. Intuitively, $F_{\text{egalitarianism}}$, which is derived from the utility-based fairness approach and represents the degree to which egalitarianism is fulfilled, can be seen as the degree to which statistical parity is fulfilled, weighted by the absolute difference in utility for the decision received (decision subject utility for a positive versus a negative decision).

Equality of opportunity (also called TPR parity) is defined as $P(D = 1 \mid Y = 1, A = 0) = P(D = 1 \mid Y = 1, A = 1)$, i.e., it requires parity of true positive rates (TPR) across groups a ∈ A [20].

Proposition 4 (Equality of opportunity as utility-based fairness). If $w_{11}$ and $w_{01}$ do not depend on the group membership ($w_{d1} \perp a$), and $w_{11} \neq w_{01}$, then the egalitarian pattern fairness condition with J = Y and j = {1} is equivalent to equality of opportunity.

The formal proof of Proposition 4 can be found in Appendix B.1.3. Compared to statistical parity, equality of opportunity only requires equal acceptance rates across those subgroups of A who are of type Y = 1. This corresponds to the claims differentiator j = {1} for J = Y. Thus, we simply require the utility weights $w_{11}$ and $w_{01}$ to be unequal and independent of a (which means that the utility weights $w_{11}$ and $w_{01}$ are constant across groups). As is the case for statistical parity, there are differences when looking at the degree to which the two notions of fairness are fulfilled (equality of opportunity and the utility-based fairness under the conditions specified in Proposition 4):

Corollary 5 (Partial fulfillment of equality of opportunity in terms of utility-based fairness). Suppose that the degree to which equality of opportunity is fulfilled is defined as the absolute difference in decision ratios for individuals of type Y = 1 across groups, i.e., $|P(D = 1 \mid Y = 1, A = 0) - P(D = 1 \mid Y = 1, A = 1)|$. If $w_{11}$ and $w_{01}$ do not depend on the group membership ($w_{d1} \perp a$), $w_{11} \neq w_{01}$, J = Y, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which equality of opportunity is fulfilled, multiplied by $|w_{11} - w_{01}|$.

The formal proof of Corollary 5 can be found in Appendix B.1.4.

Predictive parity (also called PPV parity [9] or the outcome test [51]) is defined as $P(Y = 1 \mid D = 1, A = 0) = P(Y = 1 \mid D = 1, A = 1)$, i.e., it requires parity of positive predictive values (PPV) across groups a ∈ A.

Proposition 6 (Predictive parity as utility-based fairness). If $w_{11}$ and $w_{10}$ do not depend on the group membership ($w_{1y} \perp a$), and $w_{11} \neq w_{10}$, then the egalitarian pattern fairness condition with J = D and j = {1} is equivalent to predictive parity.

The formal proof of Proposition 6 can be found in Appendix B.1.5. Compared to equality of opportunity, predictive parity requires an equal share of individuals to be of type Y = 1 among those subgroups of A who receive the decision D = 1. This corresponds to the claims differentiator j = {1} for J = D. Thus, we simply require the utility weights $w_{11}$ and $w_{10}$ to be unequal and independent of a.
As is the case for the other group fairness criteria, there are differences regarding the degree to which the two notions of fairness are fulfilled (predictive parity and the utility-based fairness under the conditions specified in Proposition 6):

Corollary 7 (Partial fulfillment of predictive parity in terms of utility-based fairness). Suppose that the degree to which predictive parity is fulfilled is defined as the absolute difference in the ratio of individuals that are of type Y = 1 among all those that are assigned the decision D = 1 across groups, i.e., $|P(Y = 1 \mid D = 1, A = 0) - P(Y = 1 \mid D = 1, A = 1)|$. If $w_{11}$ and $w_{10}$ do not depend on the group membership ($w_{1y} \perp a$), $w_{11} \neq w_{10}$, J = D, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which predictive parity is fulfilled, multiplied by $|w_{11} - w_{10}|$.

The formal proof of Corollary 7 can be found in Appendix B.1.6.

Considering Table 1, we see that existing group fairness criteria have a narrow understanding of utility and do not tolerate inequalities, which can ultimately be harmful to already marginalized groups, as previous work has shown [27]. Moreover, existing group fairness criteria embed assumptions about who has equal or different moral claims to utility. If we were to, for example, demand equalized odds for credit lending (where D is the bank's decision to either approve a loan (D = 1) or reject it (D = 0), and Y is the loan applicant's ability to repay the loan (Y = 1) or not (Y = 0)), we would make the following assumptions: people who are different in their ability to repay their loans have different claims to utility. We must thus equalize the expected utilities between people who are able to repay their loans, and we must also equalize the expected utilities between people who are not able to repay their loans. However, the assumptions listed in Table 1 may not be met for all decision-making systems. Our utility-based extension is thus necessary to implement other views of justice.

5 Discussion

As we have seen, existing group fairness criteria are special cases of our utility-based approach. This approach addresses several of the limitations of existing group fairness criteria that we discussed in Section 2.

The "leveling down objection"
The "leveling down objection" is a prevalent anti-egalitarian argument [41, 17] saying that less inequality is not desirable if it requires lowering the better-off group's welfare to match that of the worse-off group. On this basis, choosing egalitarianism as the pattern of justice has been criticized in the algorithmic fairness literature (see, e.g., [36, 27, 54]). Our approach allows using other patterns of justice, such as maximin, prioritarianism, or sufficientarianism (see Section 3.3). Other patterns that can be formalized as mathematical formulas may also be used. One could, for example, combine several patterns into one and require equal expected utilities across groups as long as none of the groups is better off than it would be without any fairness requirement. This would represent a combination of egalitarianism and a group-specific baseline threshold (similar to sufficientarianism), making a "leveling down" of the better-off group impossible and adhering to the Pareto principle (see the sketch below). Therefore, our approach links group fairness to a much larger part of the literature on distributive justice than current group fairness criteria.
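A sketch of the combined pattern just described; this construction is our illustration of the idea rather than a criterion defined in the paper, and the baseline utilities are assumed to come from evaluating the same population under a fairness-unconstrained decision rule:

```python
def combined_pattern_satisfied(eu, eu_baseline, eps=1e-9):
    """eu and eu_baseline map each group a to E(U_DS | J = j, A = a) under the
    candidate rule and under the fairness-unconstrained baseline rule."""
    equal = abs(eu[0] - eu[1]) <= eps                            # egalitarian part
    no_leveling_down = all(eu[a] >= eu_baseline[a] for a in eu)  # Pareto guard
    return equal and no_leveling_down
```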
No consideration of consequences
Existing group fairness criteria only consider the distribution of either D or Y. This could be interpreted as analyzing the distribution of utility under the assumption that utility is equivalent to either D or Y instead of, for example, a combination of D and Y. Existing group fairness criteria thus represent a very confining definition of utility. Our approach acknowledges that the utility of the decision subjects does not only depend on the decision itself but also on other attributes, such as one's ability to repay a loan or one's socioeconomic status (see, e.g., [24, 54, 11]). This is represented through the utility function described in Section 3.1.

Limited set of fairness definitions
Previous attempts to guide stakeholders in choosing appropriate fairness criteria have taken the form of explicit rules, such as in [45, 37, 44]. Such rules, however, presuppose a limited set of fairness definitions between which stakeholders can choose. Instead, we provide a method to construct ad-hoc fairness criteria that reflect the values decided on by the stakeholders by combining the definition of the utility function for decision subjects (Section 3.1), the relevant groups to compare (Section 3.2), and the pattern for a just distribution of utility (Section 3.3).

Many important questions remain and may be the subject of future research: What are the relevant trade-offs when imposing utility-based group fairness criteria as requirements? Optimal decision rules for existing group fairness criteria have been derived by [20, 16, 9] – do they change for the fairness criteria defined by our approach? Further, while our approach creates a link between group fairness and different theories of justice, it does not cover theories of distributive justice that are structurally different from the ones we discussed, e.g., Nozick's entitlement theory [39]. It is unclear how such theories could be represented in formalized fairness criteria. Moreover, there is a risk that decision makers simply use our approach to bluewash their decision-making system, which they may claim to be "fair" and "unbiased" after coming up with a fairness criterion that neatly fits their own goals. This is an issue with other fairness criteria as well. Therefore, it is important to make the process of defining fairness criteria accessible to the public, so that decision subjects can get involved and hold decision makers accountable. This raises the question: with utility functions being notoriously hard to define [49, 19], how could our approach be accessible enough for practical use? What may be needed is a process for eliciting values from stakeholders. One may object that this makes group fairness criteria similarly difficult to implement as individual fairness and counterfactual fairness. Our response to this is that existing group fairness criteria might seem easier to use, but they still embed values and assumptions about the context in which they are used. Our approach helps to make these assumptions explicit.

References

[1] Andrew Altman. 2020. Discrimination. In The Stanford Encyclopedia of Philosophy (Winter 2020 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[2] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[3] Anonymous. 2022. A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs. (2022). Unpublished manuscript.
[4] Anonymous. 2022. Representative Individuals. (2022). Unpublished manuscript.
[5] Richard Arneson. 2013. Egalitarianism. In The Stanford Encyclopedia of Philosophy (Summer 2013 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[6] Maria-Florina F Balcan, Travis Dick, Ritesh Noothigattu, and Ariel D Procaccia. 2019. Envy-Free Classification. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2019/file/e94550c93cd70fe748e6982b3439ad3b-Paper.pdf
[7] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2020. Fairness and Machine Learning. http://fairmlbook.org. Incomplete Working Draft.
[8] Solon Barocas and Andrew D Selbst. 2016. Big Data's Disparate Impact. California Law Review 104, 3 (2016), 671–732. http://www.jstor.org/stable/24758720
[9] Joachim Baumann, Anikó Hannák, and Christoph Heitz. 2022. Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3531146.3534645
[10] Joachim Baumann and Christoph Heitz. 2022. Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation. In 2022 9th Swiss Conference on Data Science (forthcoming).
[11] Reuben Binns. 2018. Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81), Sorelle A. Friedler and Christo Wilson (Eds.). PMLR, New York, NY, USA, 149–159. http://proceedings.mlr.press/v81/binns18a.html
[12] Reuben Binns. 2020. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 514–524.
[13] Violet Xinying Chen and JN Hooker. 2022. Combining leximax fairness and efficiency in a mathematical programming model. European Journal of Operational Research 299, 1 (2022), 235–248.
[14] Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 153–163.
[15] A. Feder Cooper and Ellen Abrams. 2021. Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21). Association for Computing Machinery, New York, NY, USA, 46–54. https://doi.org/10.1145/3461702.3462519
[16] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 797–806.
[17] Roger Crisp. 2003. Equality, Priority, and Compassion. Ethics 113, 4 (2003), 745–763. https://doi.org/10.1086/373954
[18] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 214–226.
[19] Charles Elkan. 2001. The Foundations of Cost-Sensitive Learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2 (IJCAI'01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 973–978.
[20] Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413 (2016).
[21] Elisa Harlan and Oliver Schnuck. 2021. Objective or biased: On the questionable use of Artificial Intelligence for job applications. Bayerischer Rundfunk (BR) (2021). https://interaktiv.br.de/ki-bewerbung/en/
[22] Hoda Heidari, Claudio Ferrari, Krishna Gummadi, and Andreas Krause. 2018. Fairness behind a veil of ignorance: A welfare analysis for automated decision making. Advances in Neural Information Processing Systems 31 (2018).
[23] Hoda Heidari, Michele Loi, Krishna P Gummadi, and Andreas Krause. 2019. A moral framework for understanding fair ML through economic models of equality of opportunity. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 181–190.
[24] Corinna Hertweck, Christoph Heitz, and Michele Loi. 2021. On the Moral Justification of Statistical Parity. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 747–757. https://doi.org/10.1145/3442188.3445936
[25] Sune Holm. 2022. The Fairness in Algorithmic Fairness. Res Publica (2022), 1–17.
[26] Nils Holtug. 2017. Prioritarianism. In Oxford Research Encyclopedia of Politics.
[27] Lily Hu and Yiling Chen. 2020. Fair classification and social welfare. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 535–545.
[28] Abigail Z Jacobs and Hanna Wallach. 2021. Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 375–385.
[29] Maximilian Kasy and Rediet Abebe. 2021. Fairness, equality, and power in algorithmic decision-making. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 576–586.
[30] Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. 2019. Preference-Informed Fairness. CoRR abs/1904.01793 (2019). arXiv:1904.01793 http://arxiv.org/abs/1904.01793
[31] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R Sunstein. 2019. Discrimination in the Age of Algorithms. Journal of Legal Analysis 10 (2019), 113–174. https://doi.org/10.1093/jla/laz001
[32] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016).
[33] Matthias Kuppler, Christoph Kern, Ruben L. Bach, and Frauke Kreuter. 2021. Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? arXiv:2105.01441 [stat.ML]
[34] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. arXiv preprint arXiv:1703.06856 (2017).
[35] Christian List. 2022. Social Choice Theory. In The Stanford Encyclopedia of Philosophy (Spring 2022 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[36] Michele Loi, Anders Herlitz, and Hoda Heidari. 2019. A Philosophical Theory of Fairness for Prediction-Based Decisions. Available at SSRN 3450300 (2019).
[37] Karima Makhlouf, Sami Zhioua, and Catuscia Palamidessi. 2021. On the Applicability of Machine Learning Fairness Notions. SIGKDD Explorations Newsletter 23, 1 (May 2021), 14–23. https://doi.org/10.1145/3468507.3468511
[38] Arvind Narayanan. 2018. Translation tutorial: 21 fairness definitions and their politics. In Conference on Fairness, Accountability and Transparency.
[39] Robert Nozick. 1974. Anarchy, State, and Utopia. Vol. 5038. New York: Basic Books.
[40] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
[41] Derek Parfit. 1995. Equality or priority. Department of Philosophy, University of Kansas.
[42] John Rawls. 1999. A Theory of Justice (2nd ed.). Harvard University Press, Cambridge, Massachusetts.
[43] John Rawls. 2001. Justice as Fairness: A Restatement. Harvard University Press.
[44] Boris Ruf and Marcin Detyniecki. 2022. A Tool Bundle for AI Fairness in Practice. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–3.
[45] Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. 2018. Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577 (2018).
[46] Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson. 2021. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. The Markup (2021). https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them
[47] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59–68.
[48] Amartya Sen. 1980. Equality of what? The Tanner Lectures on Human Values 1 (1980), 197–220.
[49] Amartya Sen. 1985. The Standard of Living. The Tanner Lectures on Human Values (1985). https://tannerlectures.utah.edu/_resources/documents/a-to-z/s/sen86.pdf
[50] Liam Shields. 2020. Sufficientarianism. Philosophy Compass 15, 11 (2020), e12704. https://doi.org/10.1111/phc3.12704
[51] Camelia Simoiu, Sam Corbett-Davies, Sharad Goel, et al. 2017. The problem of infra-marginality in outcome tests for discrimination. The Annals of Applied Statistics 11, 3 (2017), 1193–1216.
[52] Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18). Association for Computing Machinery, New York, NY, USA, 2239–2248. https://doi.org/10.1145/3219819.3220046
[53] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare). IEEE, 1–7.
[54] Hilde Weerts, Lambèr Royakkers, and Mykola Pechenizkiy. 2022. Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning. arXiv preprint arXiv:2202.08536 (2022).
[55] Pak-Hang Wong. 2020. Democratizing Algorithmic Fairness. Philosophy & Technology 33, 2 (2020), 225–244. https://doi.org/10.1007/s13347-019-00355-w
[56] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, and Adrian Weller. 2017. From Parity to Preference-Based Notions of Fairness in Classification. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 228–238.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] The limitations are described in Section 5.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] The potential negative effect of decision makers misusing our approach for bluewashing is briefly discussed in Section 5. However, it should be noted that this is a potential negative effect of all approaches to measuring fairness.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See Sections 3, 4, and B.
(b) Did you include complete proofs of all theoretical results? [Yes] See Appendix B.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A Existing group fairness criteria

Here, we briefly introduce the most discussed group fairness criteria. Table 2 lists the parity requirements associated with these criteria.
Statistical parity demands that the share of positive decisions is equal between socio-demographic groups (defined by the sensitive attribute A = {0, 1}) [18] – this is only required within each value of a set of so-called legitimate attributes l ∈ L for the criterion conditional statistical parity [16]. Equality of opportunity, similarly, demands equal shares of positive decisions between socio-demographic groups, but only for those whose target variable is positive (Y = 1) [20] – thus, it is sometimes also referred to as true positive rate (TPR) parity. Equalized odds – sometimes also called separation – requires both equality of opportunity and FPR parity (which is similar to equality of opportunity, however, it is limited to individuals of type Y = 0). In contrast, predictive parity demands equal shares of individuals of type Y = 1 across socio-demographic groups, but only for those who received a positive decision D = 1 – thus, it is sometimes also referred to as positive predictive value (PPV) parity. Sufficiency requires both PPV parity and false omission rate (FOR) parity (which is similar to PPV parity, however, it is limited to individuals who received a negative decision D = 0).

B Mapping existing group fairness criteria to our utility-based approach

B.1 Omitted proofs

B.1.1 Proof of Proposition 2

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:
$$E(U_{DS} \mid J = j, A = 0) = E(U_{DS} \mid J = j, A = 1) \quad (B.9)$$
Since there is no claims differentiator (i.e., J = ∅), this can be simplified to:
$$E(U_{DS} \mid A = 0) = E(U_{DS} \mid A = 1) \quad (B.10)$$
For $w_{11} = w_{10}$ and $w_{01} = w_{00}$, the decision subject utility (see Equation 1) is:
$$u_{DS,i} = w_{0y} + (w_{1y} - w_{0y}) \cdot d_i, \quad (B.11)$$
where $w_{1y}$ denotes the decision subject utility associated with a positive decision (D = 1) and $w_{0y}$ denotes the decision subject utility associated with a negative decision (D = 0). Thus, the expected utility for individuals of group a can be written as:
$$E(U_{DS} \mid A = a) = w_{0y} + (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = a). \quad (B.12)$$
If the utility weights of all possible outcomes do not depend on the group membership ($w_{dy} \perp a$), and $w_{1y} \neq w_{0y}$ (footnote 7), then the utility-based fairness following the pattern of egalitarianism (see Equation B.10) requires:
$$w_{0y} + (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 0) = w_{0y} + (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 1)$$
$$\Leftrightarrow (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 0) = (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 1)$$
$$\Leftrightarrow P(D = 1 \mid A = 0) = P(D = 1 \mid A = 1), \quad (B.13)$$
where the last line is identical to statistical parity.

Footnote 7: If $w_{1y} = w_{0y}$, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to statistical parity would not hold.
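The closed form in Equation (B.12) is easy to sanity-check by simulation; a minimal sketch with illustrative parameters (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
w1y, w0y = 1.0, -0.5        # w11 = w10 = w1y and w01 = w00 = w0y
p_accept = 0.6              # P(D = 1 | A = a) for some group a

d = rng.binomial(1, p_accept, size=1_000_000)
u = w0y + (w1y - w0y) * d   # utility of Equation (B.11)

# The empirical mean matches w0y + (w1y - w0y) * P(D = 1 | A = a), i.e., (B.12).
assert abs(u.mean() - (w0y + (w1y - w0y) * p_accept)) < 1e-2
```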
B.1.2 Proof of Corollary 3

Recall that the degree to which egalitarianism is fulfilled is defined as $F_{\text{egalitarianism}} = |E(U_{DS} \mid J = j, A = 0) - E(U_{DS} \mid J = j, A = 1)|$ (see Equation 3). If the utility weights of all possible outcomes do not depend on the group membership ($w_{dy} \perp a$), and $w_{11} = w_{10} \neq w_{01} = w_{00}$ (i.e., $w_{1y} \neq w_{0y}$), and J = ∅, this can be written as (see Equations B.10 and B.12):
$$F_{\text{egalitarianism}} = |(w_{0y} + (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 0)) - (w_{0y} + (w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 1))|$$
$$= |((w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 0)) - ((w_{1y} - w_{0y}) \cdot P(D = 1 \mid A = 1))|$$
$$= |(w_{1y} - w_{0y}) \cdot (P(D = 1 \mid A = 0) - P(D = 1 \mid A = 1))|, \quad (B.14)$$
where the last line corresponds to a multiplication of $|w_{1y} - w_{0y}|$ with the degree to which statistical parity is fulfilled.

B.1.3 Proof of Proposition 4

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:
$$E(U_{DS} \mid J = j, A = 0) = E(U_{DS} \mid J = j, A = 1) \quad (B.15)$$
Since the claims differentiator is the attribute Y, i.e., J = Y, and the only morally relevant value of Y is 1 (i.e., j = {1}), this can be simplified to:
$$E(U_{DS} \mid Y = 1, A = 0) = E(U_{DS} \mid Y = 1, A = 1) \quad (B.16)$$
For $y_i = 1$, the decision subject utility (see Equation 1) is:
$$u_{DS,i} = w_{01} + (w_{11} - w_{01}) \cdot d_i. \quad (B.17)$$
Thus, the expected utility for individuals of type Y = 1 in group a can be written as:
$$E(U_{DS} \mid Y = 1, A = a) = w_{01} + (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = a). \quad (B.18)$$
If $w_{11}$ and $w_{01}$ do not depend on the group membership ($w_{d1} \perp a$), and $w_{11} \neq w_{01}$ (footnote 8), then the utility-based fairness following the pattern of egalitarianism (see Equation B.16) requires:
$$w_{01} + (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 0) = w_{01} + (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 1)$$
$$\Leftrightarrow (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 0) = (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 1)$$
$$\Leftrightarrow P(D = 1 \mid Y = 1, A = 0) = P(D = 1 \mid Y = 1, A = 1), \quad (B.19)$$
where the last line is identical to equality of opportunity.

Footnote 8: If $w_{11} = w_{01}$, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to equality of opportunity would not hold.

B.1.4 Proof of Corollary 5

Recall that the degree to which egalitarianism is fulfilled is defined as $F_{\text{egalitarianism}} = |E(U_{DS} \mid J = j, A = 0) - E(U_{DS} \mid J = j, A = 1)|$ (see Equation 3). If $w_{11}$ and $w_{01}$ do not depend on the group membership ($w_{d1} \perp a$), $w_{11} \neq w_{01}$, J = Y, and j = {1}, this can be written as (see Equations B.16 and B.18):
$$F_{\text{egalitarianism}} = |(w_{01} + (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 0)) - (w_{01} + (w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 1))|$$
$$= |((w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 0)) - ((w_{11} - w_{01}) \cdot P(D = 1 \mid Y = 1, A = 1))|$$
$$= |(w_{11} - w_{01}) \cdot (P(D = 1 \mid Y = 1, A = 0) - P(D = 1 \mid Y = 1, A = 1))|, \quad (B.20)$$
where the last line corresponds to a multiplication of $|w_{11} - w_{01}|$ with the degree to which equality of opportunity is fulfilled.
B.1.5 Proof of Proposition 6

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:
$$E(U_{DS} \mid J = j, A = 0) = E(U_{DS} \mid J = j, A = 1) \quad (B.21)$$
Since the claims differentiator is the decision D, i.e., J = D, and the only morally relevant value of D is 1 (i.e., j = {1}), this can be simplified to:
$$E(U_{DS} \mid D = 1, A = 0) = E(U_{DS} \mid D = 1, A = 1) \quad (B.22)$$
For $d_i = 1$, the decision subject utility (see Equation 1) is:
$$u_{DS,i} = w_{10} + (w_{11} - w_{10}) \cdot y_i. \quad (B.23)$$
Thus, the expected utility for individuals in group a that are assigned the decision D = 1 can be written as:
$$E(U_{DS} \mid D = 1, A = a) = w_{10} + (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = a). \quad (B.24)$$
If $w_{11}$ and $w_{10}$ do not depend on the group membership ($w_{1y} \perp a$), and $w_{11} \neq w_{10}$ (footnote 9), then the utility-based fairness following the pattern of egalitarianism (see Equation B.22) requires:
$$w_{10} + (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 0) = w_{10} + (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 1)$$
$$\Leftrightarrow (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 0) = (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 1)$$
$$\Leftrightarrow P(Y = 1 \mid D = 1, A = 0) = P(Y = 1 \mid D = 1, A = 1), \quad (B.25)$$
where the last line is identical to predictive parity.

Footnote 9: If $w_{11} = w_{10}$, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to predictive parity would not hold.

B.1.6 Proof of Corollary 7

Recall that the degree to which egalitarianism is fulfilled is defined as $F_{\text{egalitarianism}} = |E(U_{DS} \mid J = j, A = 0) - E(U_{DS} \mid J = j, A = 1)|$ (see Equation 3). If $w_{11}$ and $w_{10}$ do not depend on the group membership ($w_{1y} \perp a$), $w_{11} \neq w_{10}$, J = D, and j = {1}, this can be written as (see Equations B.22 and B.24):
$$F_{\text{egalitarianism}} = |(w_{10} + (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 0)) - (w_{10} + (w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 1))|$$
$$= |((w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 0)) - ((w_{11} - w_{10}) \cdot P(Y = 1 \mid D = 1, A = 1))|$$
$$= |(w_{11} - w_{10}) \cdot (P(Y = 1 \mid D = 1, A = 0) - P(Y = 1 \mid D = 1, A = 1))|, \quad (B.26)$$
where the last line corresponds to a multiplication of $|w_{11} - w_{10}|$ with the degree to which predictive parity is fulfilled.

B.2 Mapping to other group fairness criteria

In Section 4, we mapped our utility-based approach to the three group fairness criteria statistical parity, equality of opportunity, and predictive parity. Here, we additionally show under which conditions our utility-based approach is equivalent to other group fairness criteria: conditional statistical parity, false positive rate parity, equalized odds, false omission rate parity, and sufficiency.

B.2.1 Conditional statistical parity

Conditional statistical parity is defined as $P(D = 1 \mid L = l, A = 0) = P(D = 1 \mid L = l, A = 1)$, where L is what [16] refer to as the legitimate attributes. Thus, conditional statistical parity requires equality of acceptance rates across all subgroups in A = 0 and A = 1 who are equal in their value l for L, where L can be any (combination of) feature(s) besides D and A.

Proposition 8 (Conditional statistical parity as utility-based fairness). If the utility weights of all possible outcomes do not depend on the group membership ($w_{dy} \perp a$), and $w_{11} = w_{10} \neq w_{01} = w_{00}$, then the egalitarian pattern fairness condition with J = L is equivalent to conditional statistical parity.

The proof of Proposition 8 is similar to the one of Proposition 2. Under these conditions, the degree to which $F_{\text{egalitarianism}}$ is fulfilled is equivalent to the degree to which conditional statistical parity is fulfilled, multiplied by $|w_{1y} - w_{0y}|$. This could easily be proved – similarly to the proof of Corollary 3, but with the conditions of the utility-based fairness stated in Proposition 8.

B.2.2 False positive rate (FPR) parity

FPR parity (also called predictive equality [16]) is defined as $P(D = 1 \mid Y = 0, A = 0) = P(D = 1 \mid Y = 0, A = 1)$, i.e., it requires parity of false positive rates (FPR) across groups a ∈ A.

Proposition 9 (FPR parity as utility-based fairness).
If $w_{10}$ and $w_{00}$ do not depend on the group membership ($w_{d0} \perp a$), and $w_{10} \neq w_{00}$, then the egalitarian pattern fairness condition with J = Y and j = {0} is equivalent to FPR parity.

For $y_i = 0$, the decision subject utility (see Equation 1) is:
$$u_{DS,i} = w_{00} + (w_{10} - w_{00}) \cdot d_i. \quad (B.27)$$
Thus, the expected utility for individuals of type Y = 0 in group a can
1. What is the focus and contribution of the paper regarding fairness in AI?
2. What are the strengths of the proposed framework, particularly in its originality and significance?
3. What are the weaknesses of the paper, especially regarding its limitations and lack of empirical evidence?
4. Do you have any concerns or suggestions regarding the proposed framework's ability to address the "levelling down objection"?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a general framework for evaluating the fairness of a decision-making system, based on concepts from theories of distributive justice. The authors first point out the limitations and shortcomings of the fairness metrics currently used in the literature, namely the "levelling down objection", the non-consideration of consequences, and the limited set of fairness definitions in the literature on fairness in AI. The framework they propose is based on utility values for individuals derived from the different combinations of the decision D and the variable Y. They show that the most popular fairness metrics present in the literature can be interpreted as special cases of their proposed framework.

Strengths And Weaknesses
Originality: This paper brings an interesting and fresh perspective to the discussion of the fairness concept in the AI field. I lack an understanding of the ethics literature from the philosophical field that is discussed in this paper, so I cannot comment on the originality of the ideas from that perspective. However, I believe that this paper brings an original enough perspective to the AI community working on fairness and ethical AI topics and would contribute to and enrich the discussions.
Quality: The submission is technically sound. The claims appear to be theoretically supported but lack empirical evidence.
Clarity: The paper is well written and organized.
Significance: This paper addresses an important issue, since most fairness metrics currently under consideration in the fairness literature suffer from levelling down objections.

Questions
Have the authors considered providing an experimental comparison of the proposed metrics and the existing metrics?

Limitations
The paper lacks empirical evidence for its claims. When discussing the downsides of current metrics, providing specific cases for which the claims hold true would strengthen them. Similarly, in addition to the theoretical and qualitative presentation of the framework, providing empirical evidence for why and when the framework is helpful would quantify its effectiveness.
NIPS
Title Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics

Abstract Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. We propose a general framework for analyzing the fairness of decision systems based on theories of distributive justice, encompassing different established "patterns of justice" that correspond to different normative positions. We show that the most popular group fairness metrics can be interpreted as special cases of our approach. Thus, we provide a unifying and interpretative framework for group fairness metrics that reveals the normative choices associated with each of them and that allows understanding their moral substance. At the same time, we provide an extension of the space of possible fairness metrics beyond the ones currently discussed in the fair ML literature. Our framework also allows overcoming several limitations of group fairness metrics that have been criticized in the literature, most notably (1) that they are parity-based, i.e., that they demand some form of equality between groups, which may sometimes be harmful to marginalized groups, (2) that they only compare decisions across groups, but not the resulting consequences for these groups, and (3) that the full breadth of the distributive justice literature is not sufficiently represented.

1 Introduction

Supervised machine learning (ML) is increasingly being used for prediction-based decision making in various consequential applications, such as credit lending, school admission, and recruitment. Recent work has shown that the use of algorithms for decision making can reinforce existing biases or introduce new ones [8]. Consequently, fairness has emerged as an important desideratum for automated decision making.
As recent cases in practice have shown, this is crucial in order to mitigate unjustified disadvantages towards certain demographic groups (see, e.g., [2, 46, 21, 40]). However, quantifying the fairness of decision-making systems is not straightforward, as any morally appropriate notion of fairness heavily depends on the given context.

Many different measures have emerged in the algorithmic fairness literature to assess and mitigate unfairness towards marginalized groups in decision-making systems. Many of the proposed notions of fairness are in the category of so-called group fairness criteria [7], some of which are mathematically incompatible in practice [32, 14]. Therefore, satisfying such a fairness criterion comes at the expense of not being able to satisfy others [31, 55]. Most existing group fairness criteria demand equality of a certain value between different socio-demographic groups [12]. However, our framework is also compatible with other notions of fairness that concern groups of individuals, such as preference-based fairness [56, 30]. This stands in contrast to the comparison of individuals, as it is done with other types of fairness such as individual fairness [18, 52], envy-freeness [6], or counterfactual fairness [34]. Readers unfamiliar with group fairness may refer to [38, Chapter 2], [53], and [7] for an overview of the topic. We briefly introduce and formally define the most-discussed group fairness criteria in Appendix A.

Much of the algorithmic fairness literature revolves around a limited set of group fairness metrics and is often not clearly linked to the many philosophical theories of justice that have been well discussed. Kuppler et al. [33] find that there is little to no overlap between philosophical theories of justice and metrics in the algorithmic fairness literature and conclude that "apparently, the fair machine learning literature has not taken full advantage of the rich and longstanding literature on distributive justice" [33, p. 17]. Therefore, the definitions of group fairness could be described as quite narrow when viewed from a philosophical perspective. This becomes evident when thinking about an example: Group fairness metrics typically demand that groups are equal with respect to some metric. Demanding equality between groups often makes sense, but consider a case in which we could increase the utility of one group without harming another: Should we do this? While we cannot say that this is always a good idea, it at least seems to be a reasonable objection to group fairness metrics, which demand equality at all costs. Therefore, this paper asks whether group fairness metrics can be extended to compare groups in other ways.

As of today, only a limited number of fairness metrics have been discussed, forcing stakeholders to choose between a set of pre-defined metrics that they then have to justify for their context. This paper, in contrast, presents a general framework for the derivation of targeted and context-specific fairness metrics, starting from values and moral views, and connects these to the philosophical literature, in particular to theories of distributive justice.

Our main contributions can be summarized as follows:
1. We propose a general framework for assessing the fairness of prediction-based decision systems, based on theories of distributive justice, and allowing for different established "patterns of justice" that correspond to different normative positions. The framework is based on an analysis of how utility is distributed between groups. "Pattern of justice" refers to normative ideas of what constitutes a just distribution.
2. We show that the most popular group fairness metrics can be interpreted as special cases of this approach, which thus establishes a unifying framework that includes established metrics, but also shows how new ones can be constructed.

We first present existing literature on group fairness (including its limitations) in Section 2. In Section 3, we present our unified framework for utility-based definitions of group fairness. We focus on the mathematical formalization of different aspects of the distributive justice literature while keeping the review of the philosophical side short. More details about the philosophical side can be found in the companion paper [3]. Section 4 then demonstrates that existing group fairness metrics are special cases of our utility-based approach. Finally, we discuss the implications of this and possible future work in Section 5.

2 Limitations of current group fairness criteria

Existing group fairness criteria pursue an egalitarian approach. This means that they demand equality of a certain value between different socio-demographic groups [12]. The fulfillment of these criteria is easy to assess, as this only requires access to a few variables (e.g., to check whether statistical parity is satisfied, we only need the decisions and the group membership of individuals). However, they also come with several limitations:

The "leveling down objection"
As has been shown by [27], in some cases, enforcing group fairness criteria can yield worse results for all groups in order to ensure parity between the groups. This is what is known as the "leveling down objection", which is often brought forward to challenge egalitarianism in the philosophical literature [41, 17]: In a case in which equality requires us to worsen the outcomes for everyone, should we really demand equality, or should we rather tolerate some inequalities? As criticized by Cooper and Abrams [15] and Weerts et al. [54], existing definitions of group fairness lack this differentiation as they always minimize inequality.

No consideration of consequences
As pointed out by Hertweck et al. [24] and Weerts et al. [54], a large part of the existing work on fairness criteria seems to focus on an equal distribution of favorable decisions and not on the consequences of these decisions. Binns [11] notes that these criteria "[assume] a uniform valuation of decision outcomes across different populations" [11, p. 6], and notes that this assumption does not always hold. Whether a loan approval has a positive effect on one's life or not arguably depends on one's ability to repay this loan (and possibly on other individual attributes). This narrow focus on the algorithm's decisions instead of its consequences makes it difficult to use existing group fairness criteria for a moral assessment of unfairness in decision-making systems.
Parity-based criteria that only consider the decisions but not their consequences do not allow us to deliberately give positive decisions to a larger share of the disadvantaged group, as this would be a form of unequal treatment. However, Kasy and Abebe [29] argue that in such a case, unequal treatment can be required by justice to reduce overall inequalities. Several works have therefore taken a utility-based view of fairness. Heidari et al. [22]'s utility-based definitions of fairness focus on the effects of decisions, while [13] developed a method that follows the Rawlsian leximin principle to increase the welfare of the worse-off groups. However, none of them provides a general framework that encompasses different theories of distributive justice.

Limited set of fairness definitions
Another limitation of existing group fairness criteria is that they represent a limited set of alternatives. One has to choose one over the others, as they are mathematically incompatible [32, 14]. [47, 28] have highlighted that the criteria differ with respect to underlying moral values. Thus, solely choosing one among the limited set of criteria might fail to adequately represent a morally appropriate definition of fairness for a given context. Heidari et al. [23] show how existing group fairness criteria can be viewed as instantiations of the equality of opportunity (EOP) principle. Similarly, [10] show that they can be viewed as special cases of a more general principle of fairness they call fair equality of chances (FEC). This way, they provide a framework through which the existing fairness criteria can be viewed. However, the conditions under which the existing fairness criteria map to EOP (or to FEC, respectively) are not always given. We cannot expect every application to fall neatly into one of these conditions and thus cannot expect to find a fitting fairness criterion among the ones already proposed in the group fairness literature.

These more general notions of fairness might be suitable to grasp the different existing notions of group fairness. However, they do not adequately represent the complexity of the distributive justice literature [33]. In this paper, we want to bridge the gap between fair machine learning and philosophical theories of distributive justice.

3 A framework for fairness evaluations based on distributive justice

As discussed in Section 2, current group fairness criteria have some serious shortcomings. Clearly, they do not reflect the full breadth of the literature on distributive justice [33]. To address this issue (at least partially), we propose a utility-based extension of group fairness. This section introduces this approach from a rather technical perspective. More details on its links to the literature on distributive justice can be found in [3]. Our approach is based on the observation that each decision system creates a distribution of utility among individuals and groups. Theories of distributive justice are concerned with the question of when such a distribution can be considered just. As we will later show, some of these theories can be mapped to classical group fairness concepts from the fair ML literature (see Section 4).

We consider a decision-making system that takes binary decisions D on decision subjects DS of a given population P, based on a decision rule r.
The decision rule assigns each individual i ∈ P a binary decision d_i ∈ {0, 1} by applying the decision rule to some input data, which includes an unknown but decision-relevant binary random variable Y. It does not matter how the decision rule functions: it could, for example, be an automated rule that takes decisions based on predictions of Y from an ML model, or the decisions could be made by humans. We further assume that at least two social groups are defined, denoted by different values of the sensitive attribute A.

3.1 Utility of the decision subjects

As previously discussed, current definitions of group fairness only consider the decisions themselves, but not their consequences, even though the same decision could be beneficial for some and harmful for others [54]. Our approach explicitly considers the consequences of decisions, i.e., the resulting utility (or welfare), which could be positive in the case of a benefit or negative in the case of a harm. We model the consequences with a utility function u which, in our binary context, may depend on both the decision d_i and the value y_i of Y.

The utility u_DS,i of a decision subject i is given by:

u_DS,i = w_11 · d_i · y_i + w_10 · d_i · (1 − y_i) + w_01 · (1 − d_i) · y_i + w_00 · (1 − d_i) · (1 − y_i),  (1)

where the utility weights w_dy denote the four different utility values that might be realized for the four combinations of the random variables Y and D.¹

The utility u_DS,i is a realization of a random variable U_DS. For assessing the fairness of a decision rule, we are interested in systematic differences between groups. Our framework is based on the assumption that such differences correspond to differences in expected utility. This means that we are interested in the expectation values E(U_DS) of the individual utility for the different groups in A. Note that this is a normative choice and that other ways of comparing groups are imaginable, e.g., comparing their aggregated utilities.
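To make Equation (1) concrete, the following is a minimal Python sketch (ours, not part of the paper; the weight values and the toy data are purely illustrative assumptions) that computes the decision-subject utility for each individual and the empirical group-wise expectation values:

```python
import numpy as np

def subject_utility(d, y, w11=1.0, w10=-0.5, w01=0.0, w00=0.0):
    """Decision-subject utility of Equation (1); the default weights are illustrative."""
    d, y = np.asarray(d), np.asarray(y)
    return (w11 * d * y + w10 * d * (1 - y)
            + w01 * (1 - d) * y + w00 * (1 - d) * (1 - y))

def expected_utility(u, a, group):
    """Empirical estimate of E(U_DS | A = a) for one group."""
    u, a = np.asarray(u), np.asarray(a)
    return u[a == group].mean()

# Toy population: decisions D, outcomes Y, and group membership A.
rng = np.random.default_rng(0)
d = rng.integers(0, 2, size=10_000)
y = rng.integers(0, 2, size=10_000)
a = rng.integers(0, 2, size=10_000)

u = subject_utility(d, y)
print(expected_utility(u, a, 0), expected_utility(u, a, 1))
```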
3.2 Relevant groups to compare

Theories of distributive justice are typically concerned with individuals [48], while group fairness is concerned with socially salient groups. Group fairness focuses on comparisons of different groups, as this is what theories of discrimination are concerned with [1]. This poses the question of how the comparison of individuals in distributive justice and the comparison of socially salient groups in group fairness can be combined. John Rawls's concept of "relevant positions" [42, §16, pp. 81-86] unites both ideas. We view "relevant positions" as the groups whose expected utility we want to compare and refer to them as the relevant groups (to compare).² As defined in [3], relevant groups to compare have comparable moral claims³ to receive the same utility, but probably do not receive the same utility. Our approach thus views the theories of distributive justice, which we introduced in Section 2, from the perspective of relevant groups to compare.

To be more specific, relevant groups are defined by two concepts: (1) the claims differentiator J: What makes it the case that some people have the same claims to utility while others have different claims to utility? (2) the causes of inequality (resulting in socially salient groups A): What are the most likely causes of inequalities?

As described in [3], the claims differentiator identifies people who have equal moral claims. In other words, the utility should be distributed equally between these people. This means we only consider people with equal claims for our fairness evaluation.⁴ Within the group of all individuals that have equal claims to utility (i.e., that are equal in their value for J), we specify groups that are unlikely to end up receiving equal utility, on average, based on the known causes of inequality (i.e., that are different in their value for A, which is sometimes also referred to as the protected attribute). J and A define the relevant groups that group fairness criteria compare. For simplicity, we will assume that there are only two groups A = {0, 1} that are unlikely to receive the same utility. It is, for example, common to expect individuals of a different race or gender not to derive the same utility from decision systems.

Footnote 1: In practice, one could use a much more specific utility function, using other attributes as well. A rather simple extension would also take A into account and define these four utility weights for each group separately. That should be supported by an analysis of the inequality generated in the transition between the decision space and the utility space between different (socially salient or other) groups. In philosophy and economics, the work of Amartya Sen explains why resources do not always convert into the same capabilities (options to be and do) [49, pp. 21-23].
Footnote 2: This builds on Anonymous [4], which refers to relevant positions as "representative individuals".
Footnote 3: For a philosophical analysis of comparable moral claims to a good, see [25].
Footnote 4: This concept is similar to the justifier described in [36, 10].

In the next step, we want to compare the utilities of the relevant groups. Specifically, we will compare the expectation value of utility over all decisions made for a given population under a given decision rule. We denote this as the expected utility that takes the relevant groups into account, E(U_DS | J = j, A = a), where J denotes the claims differentiator, j corresponds to a possible value of the variable J, and a ∈ A denotes the different socially salient groups to be compared with each other. In our framework, assessing fairness means comparing relevant groups with the same j but different a with respect to the distribution of utility.

3.3 Patterns for a just distribution of utility

The claims differentiator J tells us which individuals have equal moral claims to the utility distributed by the decision process. However, in some cases, an equal distribution of utility among the relevant groups (defined by J and A) may not be the primary concern for justice (see below). Our approach offers different choices, which we refer to as patterns of justice. For each of them, we will briefly explain their normative view of what constitutes justice. For each pattern, we formulate a fairness constraint and a fairness metric: a fairness constraint is a mathematical formalization of a pattern of justice, which can either be satisfied or not. A fairness metric F, on the other hand, can measure the degree to which this criterion is fulfilled. Note that we construct fairness metrics for a binary A = {0, 1}. Therefore, all patterns of justice that we present compare the expected utility of two relevant groups: A = 0 ∧ J = j (i.e., E(U_DS | J = j, A = 0)) and A = 1 ∧ J = j (i.e., E(U_DS | J = j, A = 1)).
However, the patterns of justice that we introduce here (egalitarianism, maximin, prioritarianism, sufficientarianism) can easily be translated to cases with more groups.

In the following, we introduce only a few patterns of justice (representing fairness principles for the allocation of goods) that are widely discussed in the philosophical literature. However, our utility-based definition of group fairness should in no way be seen as limited to these patterns. Our approach can easily be extended to other patterns of justice, and one may also implement their own pattern of justice. Our goal here is simply to highlight a few popular patterns of justice and how they can be embedded in our approach.

3.3.1 Egalitarianism

Egalitarianism, as the name suggests, demands equality [5]. Egalitarianism as a broad concept does not, however, specify what should be equalized. This is the subject of the "equality of what" debate initiated by Sen [48]. One could, for example, aim to equalize opportunities (equality of opportunity) or outcomes (equality of outcomes).

Fairness criterion The egalitarian fairness criterion is satisfied if the expected utility is equal for the relevant groups:

E(U_DS | J = j, A = 0) = E(U_DS | J = j, A = 1)  (2)

Fairness metric The degree to which egalitarianism is fulfilled is measured as the absolute difference between the two groups' expected utilities (lower values are better):⁵

F_egalitarianism = |E(U_DS | J = j, A = 0) − E(U_DS | J = j, A = 1)|  (3)

Footnote 5: Here, we consider the absolute difference in expected utilities. Alternatively, we could also consider the ratio of the two expected utilities.

3.3.2 Maximin

Maximin describes the principle that, among a set of possible distributions, the one that maximizes the expected utility of the relevant group that is worst off should be chosen [35]. In contrast to egalitarianism, inequalities are thus tolerated if the worst-off group benefits from them. This has been defended by Rawls in the form of the "difference principle" [42, 43].

Fairness criterion The maximin fairness criterion is satisfied if there is no other possible distribution that would lead to a greater expected utility of the worst-off relevant group, which we denote by U_DS^worst-off = min_{a∈A} E(U_DS | J = j, A = a). It thus requires that the decision rule r′ (which represents the decision taken for each individual) results in a U_DS^worst-off(r′) that is greater than or equal to U_DS^worst-off(r) for any decision rule r from the set of all possible decision rules R:

U_DS^worst-off(r′) ≥ max_{r∈R} U_DS^worst-off(r)  (4)

Fairness metric The degree to which maximin is fulfilled is measured as the lowest expected utility among all relevant groups (higher values are better):

F_maximin = min_{a∈A} E(U_DS | J = j, A = a)  (5)
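As a minimal illustration (our own sketch, not from the paper), the two metrics introduced so far can be computed directly from the estimated group-conditional expected utilities. Note that the maximin criterion itself quantifies over all possible decision rules, whereas the metric below only scores one given rule:

```python
def f_egalitarianism(expected_utilities):
    """Equation (3): absolute difference of the groups' expected utilities (lower is better)."""
    eu0, eu1 = expected_utilities
    return abs(eu0 - eu1)

def f_maximin(expected_utilities):
    """Equation (5): expected utility of the worst-off relevant group (higher is better)."""
    return min(expected_utilities)

# Example: E(U_DS | J = j, A = 0) = 0.4 and E(U_DS | J = j, A = 1) = 0.7.
print(f_egalitarianism([0.4, 0.7]), f_maximin([0.4, 0.7]))  # ~0.3 and 0.4
```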
3.3.3 Prioritarianism

Prioritarianism describes the principle that, among a set of possible distributions, the one that maximizes the weighted sum of utilities across all people should be chosen [26]. In contrast to egalitarianism, inequalities are thus tolerated if they increase this weighted sum of expected utilities. In this weighted sum, the expected utility of the worst-off relevant groups is given a higher weight (the maximin principle can be seen as the extreme version of this, as an infinite weight is given to the worst-off relevant groups).

Fairness criterion The prioritarian fairness criterion is satisfied if there is no other possible distribution that would lead to a greater overall expected utility, which is measured as a weighted aggregation of the relevant groups' expected utilities, where the expected utility of the worst-off relevant group is given a higher weight. It thus requires that the decision rule r′ results in a weighted utility Ũ_DS(r′) = k · U_DS^worst-off(r′) + U_DS^better-off(r′) that is greater than or equal to Ũ_DS(r) for any decision rule r from the set of all possible decision rules R:

Ũ_DS(r′) ≥ max_{r∈R} Ũ_DS(r),  (6)

where Ũ_DS denotes the sum of decision subject utilities for all groups with a weight k > 1 applied to the worst-off group.

Fairness metric The degree to which prioritarianism is fulfilled is measured as an aggregate of the (weighted) expected utilities (higher values are better):

F_prioritarianism = k · min(E(U_DS | J = j, A = 0), E(U_DS | J = j, A = 1)) + max(E(U_DS | J = j, A = 0), E(U_DS | J = j, A = 1))  (7)

3.3.4 Sufficientarianism

Sufficientarianism [50] describes the principle that there is a minimum threshold of utility that should be reached by everyone in expectation. Inequalities between relevant groups above this minimum threshold are acceptable according to this principle. Inequalities are thus tolerated as long as all groups achieve a minimum level of utility in expectation.

Fairness criterion The sufficientarian fairness criterion is satisfied if all groups' expected utilities are above a given threshold t:

∀a ∈ A: E(U_DS | J = j, A = a) ≥ t  (8)

Fairness metric The degree to which sufficientarianism is fulfilled is measured as the number of groups whose expected utility is above the given threshold t (higher values are better):

F_sufficientarianism = Σ_{a∈A} T_a, where T_a = 1 if E(U_DS | J = j, A = a) ≥ t, and T_a = 0 otherwise.

3.4 Extension of group fairness

Based on the mathematical framework outlined in this section, we suggest an extension of the current understanding of group fairness as described in Section 2. Instead of seeing group fairness as demanding equality between socio-demographic groups with respect to some value, we instead propose the following definition:

Definition 1 (Group fairness). Group fairness is the just distribution of utility among relevant groups.

What makes a distribution just depends on the pattern of justice. Thus, our extended understanding of group fairness does not necessarily require equal expected utilities across groups. Furthermore, our definition ensures that only relevant groups are being compared (in the most familiar case, these correspond to socio-demographic groups).

Group fairness criteria, in our sense, specify when group fairness is satisfied by a decision-making system. From this, it follows that there are more group fairness criteria than previously acknowledged. This extension of group fairness criteria alleviates some of the criticisms of currently popular group fairness criteria, as we will show in Section 5.
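Continuing the sketch above (again purely illustrative; the weight k and the threshold t are free normative parameters, not values prescribed by the paper), the remaining two patterns can be scored analogously for a given decision rule:

```python
def f_prioritarianism(expected_utilities, k=2.0):
    """Equation (7): k times the worst-off plus the better-off expected utility (higher is better)."""
    return k * min(expected_utilities) + max(expected_utilities)

def f_sufficientarianism(expected_utilities, t=0.5):
    """Number of relevant groups whose expected utility reaches the threshold t (higher is better)."""
    return sum(1 for eu in expected_utilities if eu >= t)

print(f_prioritarianism([0.4, 0.7]), f_sufficientarianism([0.4, 0.7]))  # 1.5 and 1
```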
4 Relation to existing group fairness criteria

Existing group fairness criteria are special cases of the utility-based extension we propose. In this section, we formally show under which conditions our approach maps to existing group fairness criteria (see Table 1 for a summary of the results). In particular, we look at well-known group fairness criteria: (conditional) statistical parity, equality of opportunity, false positive rate (FPR) parity, equalized odds, predictive parity, false omission rate (FOR) parity, and sufficiency. The mathematical definitions of these criteria can be found in Table 2 in Appendix A. Furthermore, we show how the utility-based group fairness metrics relate to existing ones. In this section, we only demonstrate when our utility-based approach results in one of three often discussed group fairness criteria: statistical parity, equality of opportunity, and predictive parity. We refer the interested reader to Appendix B.2, where we provide a similar mapping for other existing group fairness criteria.

The findings we present in this section extend those of [23], [36], and [10]. While [23] consider the distribution of undeserved utility (what they call the difference between an individual's actual and effort-based utility), [36] and [10] use the decision subject utility U_DS to derive a morally appropriate group fairness definition. This is similar to our approach presented in this paper; however, they only consider the two options U_DS = D and U_DS = Y, while our approach allows for arbitrary functions f for the utility: U_DS = f(D, Y).

Statistical parity (also called demographic parity or group fairness [18]) is defined as P(D = 1 | A = 0) = P(D = 1 | A = 1). For specific decision subject utility weights w_dy and without any claims differentiator J, the condition of the utility-based fairness criteria derived from our framework is equivalent to statistical parity:

Proposition 2 (Statistical parity as utility-based fairness). If the utility weights of all possible outcomes (as described in Section 3.1) do not depend on the group membership (w_dy ⊥ a), and w_11 = w_10 ≠ w_01 = w_00, then the egalitarian pattern fairness condition with J = ∅ is equivalent to statistical parity.

The formal proof of Proposition 2 can be found in Appendix B.1.1.

We use w_1y to denote the decision subject utility associated with a positive decision (D = 1) and w_0y to denote the decision subject utility associated with a negative decision (D = 0).⁶

Footnote 6: Recall that utility weights are denoted by w_dy, where both d and y can take the value 0 or 1. For simplicity, we use w_1y as a placeholder for the utility weights of all outcomes with a positive decision (d = 1) and for individuals of any type (y ∈ {0, 1}), i.e., w_10 or w_11.
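A small simulation (ours; all numbers are illustrative assumptions) makes Proposition 2 concrete: when the utility depends on the decision only, the egalitarian condition holds exactly when the acceptance rates are equal, and the egalitarian metric equals the statistical parity gap scaled by |w_1y − w_0y|, the relation quantified in Corollary 3 below:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=200_000)
d = rng.random(a.size) < np.where(a == 0, 0.7, 0.5)  # group-dependent acceptance rates

w1y, w0y = 2.0, 0.5                # i.e., w11 = w10 = 2.0 and w01 = w00 = 0.5
u = np.where(d, w1y, w0y)          # utility depends on the decision only

f_egal = abs(u[a == 0].mean() - u[a == 1].mean())
sp_gap = abs(d[a == 0].mean() - d[a == 1].mean())
print(f_egal, abs(w1y - w0y) * sp_gap)  # the two values agree up to sampling noise
```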
As we showed above, requiring statistical parity can be equivalent to requiring the fulfillment of a utility-based group fairness criterion. However, even if the two criteria are equivalent, this is not necessarily true if we compare the group fairness metrics that specify the degree to which these two criteria are fulfilled, i.e., if we compare the degree to which statistical parity is fulfilled with the degree to which a utility-based fairness metric is fulfilled:

Corollary 3 (Partial fulfillment of statistical parity in terms of utility-based fairness). Suppose that the degree to which statistical parity is fulfilled is defined as the absolute difference in decision ratios across groups, i.e., |P(D = 1 | A = 0) − P(D = 1 | A = 1)|. If the utility weights of all possible outcomes do not depend on the group membership (w_dy ⊥ a), and w_11 = w_10 ≠ w_01 = w_00 (i.e., w_1y ≠ w_0y), and J = ∅, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which statistical parity is fulfilled, multiplied by |w_1y − w_0y|.

The formal proof of Corollary 3 can be found in Appendix B.1.2. Intuitively, F_egalitarianism, which is derived from the utility-based fairness approach and represents the degree to which egalitarianism is fulfilled, can be seen as the degree to which statistical parity is fulfilled, weighted by the absolute difference in utility for the decision received (the decision subject utility for a positive versus a negative decision).

Equality of opportunity (also called TPR parity) is defined as P(D = 1 | Y = 1, A = 0) = P(D = 1 | Y = 1, A = 1), i.e., it requires parity of true positive rates (TPR) across groups a ∈ A [20].

Proposition 4 (Equality of opportunity as utility-based fairness). If w_11 and w_01 do not depend on the group membership (w_d1 ⊥ a), and w_11 ≠ w_01, then the egalitarian pattern fairness condition with J = Y and j = {1} is equivalent to equality of opportunity.

The formal proof of Proposition 4 can be found in Appendix B.1.3. Compared to statistical parity, equality of opportunity only requires equal acceptance rates across those subgroups of A who are of type Y = 1. This corresponds to the claims differentiator j = {1} for J = Y. Thus, we simply require the utility weights w_11 and w_01 to be unequal and independent of a (which means that the utility weights w_11 and w_01 are constant across groups). As is the case for statistical parity, there are differences when looking at the degree to which the two notions of fairness are fulfilled (equality of opportunity and the utility-based fairness under the conditions specified in Proposition 4):

Corollary 5 (Partial fulfillment of equality of opportunity in terms of utility-based fairness). Suppose that the degree to which equality of opportunity is fulfilled is defined as the absolute difference in decision ratios for individuals of type Y = 1 across groups, i.e., |P(D = 1 | Y = 1, A = 0) − P(D = 1 | Y = 1, A = 1)|. If w_11 and w_01 do not depend on the group membership (w_d1 ⊥ a), w_11 ≠ w_01, J = Y, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which equality of opportunity is fulfilled, multiplied by |w_11 − w_01|.

The formal proof of Corollary 5 can be found in Appendix B.1.4.

Predictive parity (also called PPV parity [9] or the outcome test [51]) is defined as P(Y = 1 | D = 1, A = 0) = P(Y = 1 | D = 1, A = 1), i.e., it requires parity of positive predictive values (PPV) across groups a ∈ A.

Proposition 6 (Predictive parity as utility-based fairness). If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), and w_11 ≠ w_10, then the egalitarian pattern fairness condition with J = D and j = {1} is equivalent to predictive parity.

The formal proof of Proposition 6 can be found in Appendix B.1.5. Compared to equality of opportunity, predictive parity requires an equal share of individuals to be of type Y = 1 among those subgroups of A who receive the decision D = 1. This corresponds to the claims differentiator j = {1} for J = D. Thus, we simply require the utility weights w_11 and w_10 to be unequal and independent of a.
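The same kind of numerical check (again an illustrative simulation of ours, not from the paper) works for Propositions 4 and 6 by restricting the comparison to the subpopulation selected by the claims differentiator. The sketch below does this for equality of opportunity (J = Y, j = 1); the analogous check for predictive parity conditions on D = 1 instead:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
a = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
# Group-dependent true positive rates; a common base rate for Y = 0 individuals.
d = np.where(y == 1, rng.random(n) < np.where(a == 0, 0.9, 0.6), rng.random(n) < 0.2)

w11, w01 = 1.0, -1.0                      # constant across groups, as required
u = np.where(d, w11, w01)                 # valid as utility on the Y = 1 subpopulation

m = y == 1                                # claims differentiator J = Y with j = 1
f_egal = abs(u[m & (a == 0)].mean() - u[m & (a == 1)].mean())
tpr_gap = abs(d[m & (a == 0)].mean() - d[m & (a == 1)].mean())
print(f_egal, abs(w11 - w01) * tpr_gap)   # agree up to sampling noise (Corollary 5)
```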
As is the case for the other group fairness criteria, there are differences regarding the degree to which the two notions of fairness are fulfilled (predictive parity and the utility-based fairness under the conditions specified in Proposition 6):

Corollary 7 (Partial fulfillment of predictive parity in terms of utility-based fairness). Suppose that the degree to which predictive parity is fulfilled is defined as the absolute difference in the ratio of individuals that are of type Y = 1 among all those that are assigned the decision D = 1 across groups, i.e., |P(Y = 1 | D = 1, A = 0) − P(Y = 1 | D = 1, A = 1)|. If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), w_11 ≠ w_10, J = D, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which predictive parity is fulfilled, multiplied by |w_11 − w_10|.

The formal proof of Corollary 7 can be found in Appendix B.1.6.

Considering Table 1, we see that existing group fairness criteria have a narrow understanding of utility and do not tolerate inequalities, which can ultimately be harmful to already marginalized groups, as previous work has shown [27]. Moreover, existing group fairness criteria embed assumptions about who has equal or different moral claims to utility. If we were to, for example, demand equalized odds for credit lending (where D is the bank's decision to either approve a loan (D = 1) or reject it (D = 0), and Y is the loan applicant's ability to repay the loan (Y = 1) or not (Y = 0)), we would make the following assumption: people who differ in their ability to repay their loans have different claims to utility. We must thus equalize the expected utilities between people who are able to repay their loans, and we must also equalize the expected utilities between people who are not able to repay their loans. However, the assumptions listed in Table 1 may not be met for all decision-making systems. Our utility-based extension is thus necessary to implement other views of justice.

5 Discussion

As we have seen, existing group fairness criteria are special cases of our utility-based approach. This approach addresses several of the limitations of existing group fairness criteria that we discussed in Section 2.

The "leveling down objection" The "leveling down objection" is a prevalent anti-egalitarian argument [41, 17] saying that less inequality is not desirable if it requires lowering the better-off group's welfare to match that of the worse-off group. On this basis, choosing egalitarianism as the pattern of justice has been criticized in the algorithmic fairness literature (see, e.g., [36, 27, 54]). Our approach allows using other patterns of justice, such as maximin, prioritarianism, or sufficientarianism (see Section 3.3). Other patterns that can be formalized as mathematical formulas may also be used. One could, for example, combine several patterns into one and require equal expected utilities across groups as long as none of the groups is better off than it would be without any fairness requirement. This would represent a combination of egalitarianism and a group-specific baseline threshold (similar to sufficientarianism), making a "leveling down" of the better-off group impossible and adhering to the Pareto principle.
Therefore, our approach links group fairness to a much larger part of the literature on distributive justice than current group fairness criteria.

No consideration of consequences Existing group fairness criteria only consider the distribution of either D or Y. This could be interpreted as analyzing the distribution of utility while assuming that utility is equivalent to either D or Y instead of, for example, a combination of D and Y. Existing group fairness criteria thus represent a very confining definition of utility. Our approach acknowledges that the utility of the decision subjects does not only depend on the decision itself but also on other attributes, such as one's ability to repay a loan or one's socioeconomic status (see, e.g., [24, 54, 11]). This is represented through the utility function described in Section 3.1.

Limited set of fairness definitions Previous attempts to guide stakeholders in choosing appropriate fairness criteria have taken the form of explicit rules, such as in [45, 37, 44]. Such rules, however, presuppose a limited set of fairness definitions between which stakeholders can choose. Instead, we provide a method to construct ad-hoc fairness criteria that reflect the values decided on by the stakeholders by combining the definition of the utility function for decision subjects (Section 3.1), the relevant groups to compare (Section 3.2), and the pattern for a just distribution of utility (Section 3.3).

Many important questions remain and may be the subject of future research: What are relevant trade-offs when imposing utility-based group fairness criteria as requirements? Optimal decision rules for existing group fairness criteria have been derived by [20, 16, 9] – do they change for the fairness criteria defined by our approach? Further, while our approach creates a link between group fairness and different theories of justice, it does not cover theories of distributive justice that are structurally different from the ones we discussed, e.g., Nozick's entitlement theory [39]. It is unclear how such theories could be represented in formalized fairness criteria. Moreover, there is a risk that decision makers simply use our approach to bluewash their decision-making system, which they may claim to be "fair" and "unbiased" after coming up with a fairness criterion that neatly fits their own goals. This is an issue with other fairness criteria as well. Therefore, it is important to make the process of defining fairness criteria accessible to the public, so that decision subjects can get involved and hold decision makers accountable. This raises the question: with utility functions being notoriously hard to define [49, 19], how could our approach be accessible enough for practical use? What may be needed is a process for eliciting values from stakeholders. One may object that this makes group fairness criteria similarly difficult to implement as individual fairness and counterfactual fairness. Our response to this is that existing group fairness criteria might seem easier to use, but they still embed values and assumptions about the context in which they are used. Our approach helps to make these assumptions explicit.

References

[1] Andrew Altman. 2020. Discrimination. In The Stanford Encyclopedia of Philosophy (Winter 2020 ed.), Edward N. Zalta (Ed.).
Metaphysics Research Lab, Stanford University.
[2] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[3] Anonymous. 2022. A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs. (2022). Unpublished manuscript.
[4] Anonymous. 2022. Representative Individuals. (2022). Unpublished manuscript.
[5] Richard Arneson. 2013. Egalitarianism. In The Stanford Encyclopedia of Philosophy (Summer 2013 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[6] Maria-Florina F Balcan, Travis Dick, Ritesh Noothigattu, and Ariel D Procaccia. 2019. Envy-Free Classification. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2019/file/e94550c93cd70fe748e6982b3439ad3b-Paper.pdf
[7] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2020. Fairness and Machine Learning. http://fairmlbook.org Incomplete Working Draft.
[8] Solon Barocas and Andrew D Selbst. 2016. Big Data's Disparate Impact. California Law Review 104, 3 (2016), 671–732. http://www.jstor.org/stable/24758720
[9] Joachim Baumann, Anikó Hannák, and Christoph Heitz. 2022. Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3531146.3534645
[10] Joachim Baumann and Christoph Heitz. 2022. Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation. In 2022 9th Swiss Conference on Data Science (forthcoming).
[11] Reuben Binns. 2018. Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81), Sorelle A. Friedler and Christo Wilson (Eds.). PMLR, New York, NY, USA, 149–159. http://proceedings.mlr.press/v81/binns18a.html
[12] Reuben Binns. 2020. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 514–524.
[13] Violet Xinying Chen and JN Hooker. 2022. Combining leximax fairness and efficiency in a mathematical programming model. European Journal of Operational Research 299, 1 (2022), 235–248.
[14] Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 153–163.
[15] A. Feder Cooper and Ellen Abrams. 2021. Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) (AIES '21). Association for Computing Machinery, New York, NY, USA, 46–54. https://doi.org/10.1145/3461702.3462519
[16] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 797–806.
[17] Roger Crisp. 2003. Equality, Priority, and Compassion.
113, 4 (2003), 745–763. https://doi.org/10.1086/373954
[18] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 214–226.
[19] Charles Elkan. 2001. The Foundations of Cost-Sensitive Learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2 (IJCAI'01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 973–978.
[20] Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413 (2016).
[21] Elisa Harlan and Oliver Schnuck. 2021. Objective or biased: On the questionable use of Artificial Intelligence for job applications. Bayerischer Rundfunk (BR) (2021). https://interaktiv.br.de/ki-bewerbung/en/
[22] Hoda Heidari, Claudio Ferrari, Krishna Gummadi, and Andreas Krause. 2018. Fairness behind a veil of ignorance: A welfare analysis for automated decision making. Advances in Neural Information Processing Systems 31 (2018).
[23] Hoda Heidari, Michele Loi, Krishna P Gummadi, and Andreas Krause. 2019. A moral framework for understanding fair ML through economic models of equality of opportunity. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 181–190.
[24] Corinna Hertweck, Christoph Heitz, and Michele Loi. 2021. On the Moral Justification of Statistical Parity. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT '21). Association for Computing Machinery, New York, NY, USA, 747–757. https://doi.org/10.1145/3442188.3445936
[25] Sune Holm. 2022. The Fairness in Algorithmic Fairness. Res Publica (2022), 1–17.
[26] Nils Holtug. 2017. Prioritarianism. In Oxford Research Encyclopedia of Politics.
[27] Lily Hu and Yiling Chen. 2020. Fair classification and social welfare. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 535–545.
[28] Abigail Z Jacobs and Hanna Wallach. 2021. Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 375–385.
[29] Maximilian Kasy and Rediet Abebe. 2021. Fairness, equality, and power in algorithmic decision-making. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 576–586.
[30] Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. 2019. Preference-Informed Fairness. CoRR abs/1904.01793 (2019). arXiv:1904.01793 http://arxiv.org/abs/1904.01793
[31] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R Sunstein. 2019. Discrimination in the Age of Algorithms. Journal of Legal Analysis 10 (2019), 113–174. https://doi.org/10.1093/jla/laz001
[32] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016).
[33] Matthias Kuppler, Christoph Kern, Ruben L. Bach, and Frauke Kreuter. 2021. Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? arXiv:2105.01441 [stat.ML]
[34] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. arXiv preprint arXiv:1703.06856 (2017).
[35] Christian List. 2022. Social Choice Theory.
In The Stanford Encyclopedia of Philosophy (Spring 2022 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[36] Michele Loi, Anders Herlitz, and Hoda Heidari. 2019. A Philosophical Theory of Fairness for Prediction-Based Decisions. Available at SSRN 3450300 (2019).
[37] Karima Makhlouf, Sami Zhioua, and Catuscia Palamidessi. 2021. On the Applicability of Machine Learning Fairness Notions. SIGKDD Explor. Newsl. 23, 1 (May 2021), 14–23. https://doi.org/10.1145/3468507.3468511
[38] Arvind Narayanan. 2018. Translation tutorial: 21 fairness definitions and their politics. In Conference on Fairness, Accountability and Transparency.
[39] Robert Nozick. 1974. Anarchy, State, and Utopia. New York: Basic Books.
[40] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
[41] Derek Parfit. 1995. Equality or priority. Department of Philosophy, University of Kansas.
[42] John Rawls. 1999. A Theory of Justice (2nd ed.). Harvard University Press, Cambridge, Massachusetts.
[43] John Rawls. 2001. Justice as Fairness: A Restatement. Harvard University Press.
[44] Boris Ruf and Marcin Detyniecki. 2022. A Tool Bundle for AI Fairness in Practice. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–3.
[45] Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. 2018. Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577 (2018).
[46] Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson. 2021. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. The Markup (2021). https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them
[47] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59–68.
[48] Amartya Sen. 1980. Equality of what? The Tanner Lecture on Human Values 1 (1980), 197–220.
[49] Amartya Sen. 1985. The Standard of Living. The Tanner Lecture on Human Values (1985). https://tannerlectures.utah.edu/_resources/documents/a-to-z/s/sen86.pdf
[50] Liam Shields. 2020. Sufficientarianism. Philosophy Compass 15, 11 (2020), e12704. https://doi.org/10.1111/phc3.12704
[51] Camelia Simoiu, Sam Corbett-Davies, Sharad Goel, et al. 2017. The problem of infra-marginality in outcome tests for discrimination. The Annals of Applied Statistics 11, 3 (2017), 1193–1216.
[52] Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (London, United Kingdom) (KDD '18). Association for Computing Machinery, New York, NY, USA, 2239–2248. https://doi.org/10.1145/3219819.3220046
[53] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
IEEE, 1–7.
[54] Hilde Weerts, Lambèr Royakkers, and Mykola Pechenizkiy. 2022. Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning. arXiv preprint arXiv:2202.08536 (2022).
[55] Pak-Hang Wong. 2020. Democratizing Algorithmic Fairness. Philosophy & Technology 33, 2 (2020), 225–244. https://doi.org/10.1007/s13347-019-00355-w
[56] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, and Adrian Weller. 2017. From Parity to Preference-Based Notions of Fairness in Classification. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 228–238.

Checklist

1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
   (b) Did you describe the limitations of your work? [Yes] The limitations are described in Section 5.
   (c) Did you discuss any potential negative societal impacts of your work? [Yes] The potential negative effect of decision makers misusing our approach for bluewashing is briefly discussed in Section 5. However, it should be noted that this is a potential negative effect of all approaches to measuring fairness.
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Sections 3 and 4 and Appendix B.
   (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix B.
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [N/A]
   (b) Did you mention the license of the assets? [N/A]
   (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A Existing group fairness criteria

Here, we briefly introduce the most discussed group fairness criteria. Table 2 lists the parity requirements associated with these criteria. Statistical parity demands that the share of positive decisions is equal between socio-demographic groups (defined by the sensitive attribute A = {0, 1}) [18]; for conditional statistical parity, this is only required within each value l ∈ L of a set of so-called legitimate attributes [16]. Equality of opportunity similarly demands equal shares of positive decisions between socio-demographic groups, but only for those whose target variable is positive (Y = 1) [20]; thus, it is sometimes also referred to as true positive rate (TPR) parity. Equalized odds (sometimes also called separation) requires both equality of opportunity and FPR parity (which is similar to equality of opportunity but limited to individuals of type Y = 0). In contrast, predictive parity demands equal shares of individuals of type Y = 1 across socio-demographic groups, but only among those who received a positive decision D = 1; thus, it is sometimes also referred to as positive predictive value (PPV) parity. Sufficiency requires both PPV parity and false omission rate (FOR) parity (which is similar to PPV parity but limited to individuals who received a negative decision D = 0).
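For reference, here is a compact sketch (ours, with illustrative function names not taken from the paper) of how the three criteria discussed in the main text could be checked empirically from decisions D, outcomes Y, and group membership A; a gap of (approximately) zero means the corresponding parity criterion holds:

```python
import numpy as np

def _rate(x, mask):
    """Mean of x over the individuals selected by `mask`."""
    return np.asarray(x)[mask].mean()

def parity_gaps(d, y, a):
    """Empirical gaps for statistical parity, equality of opportunity, and predictive parity."""
    d, y, a = map(np.asarray, (d, y, a))
    return {
        "statistical parity": abs(_rate(d, a == 0) - _rate(d, a == 1)),
        "equality of opportunity": abs(_rate(d, (y == 1) & (a == 0))
                                       - _rate(d, (y == 1) & (a == 1))),
        "predictive parity": abs(_rate(y, (d == 1) & (a == 0))
                                 - _rate(y, (d == 1) & (a == 1))),
    }
```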
B Mapping existing group fairness criteria to our utility-based approach

B.1 Omitted proofs

B.1.1 Proof of Proposition 2

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_DS | J = j, A = 0) = E(U_DS | J = j, A = 1)  (B.9)

Since there is no claims differentiator (i.e., J = ∅), this can be simplified to:

E(U_DS | A = 0) = E(U_DS | A = 1)  (B.10)

For w_11 = w_10 and w_01 = w_00, the decision subject utility (see Equation 1) is:

u_DS,i = w_0y + (w_1y − w_0y) · d_i,  (B.11)

where w_1y denotes the decision subject utility associated with a positive decision (D = 1) and w_0y denotes the decision subject utility associated with a negative decision (D = 0). Thus, the expected utility for individuals of group a can be written as:

E(U_DS | A = a) = w_0y + (w_1y − w_0y) · P(D = 1 | A = a).  (B.12)

If the utility weights of all possible outcomes do not depend on the group membership (w_dy ⊥ a), and w_1y ≠ w_0y,⁷ then the utility-based fairness following the pattern of egalitarianism (see Equation B.10) requires:

w_0y + (w_1y − w_0y) · P(D = 1 | A = 0) = w_0y + (w_1y − w_0y) · P(D = 1 | A = 1)
⇔ (w_1y − w_0y) · P(D = 1 | A = 0) = (w_1y − w_0y) · P(D = 1 | A = 1)
⇔ P(D = 1 | A = 0) = P(D = 1 | A = 1),  (B.13)

where the last line is identical to statistical parity.

Footnote 7: If w_1y = w_0y, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to statistical parity would not hold.
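The algebra behind Equation (B.13) can also be checked symbolically; the following sympy snippet (ours, purely as a sanity check and not part of the proof) reproduces the factorization:

```python
import sympy as sp

w1y, w0y, p0, p1 = sp.symbols("w1y w0y p0 p1", real=True)

# Group-wise expected utilities as in Equation (B.12).
eu0 = w0y + (w1y - w0y) * p0
eu1 = w0y + (w1y - w0y) * p1

# The egalitarian condition eu0 = eu1 factors into (w1y - w0y) * (p0 - p1) = 0,
# so for w1y != w0y it is equivalent to p0 = p1, i.e., statistical parity.
print(sp.factor(eu0 - eu1))            # (p0 - p1)*(w1y - w0y), up to sign/ordering
print(sp.solve(sp.Eq(eu0, eu1), p0))   # [p1] in the generic case w1y != w0y
```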
B.1.2 Proof of Corollary 3

Recall that the degree to which egalitarianism is fulfilled is defined as F_egalitarianism = |E(U_DS | J = j, A = 0) − E(U_DS | J = j, A = 1)| (see Equation 3). If the utility weights of all possible outcomes do not depend on the group membership (w_dy ⊥ a), and w_11 = w_10 ≠ w_01 = w_00 (i.e., w_1y ≠ w_0y), and J = ∅, this can be written as (see Equations B.10 and B.12):

F_egalitarianism = |(w_0y + (w_1y − w_0y) · P(D = 1 | A = 0)) − (w_0y + (w_1y − w_0y) · P(D = 1 | A = 1))|
= |((w_1y − w_0y) · P(D = 1 | A = 0)) − ((w_1y − w_0y) · P(D = 1 | A = 1))|
= |(w_1y − w_0y) · (P(D = 1 | A = 0) − P(D = 1 | A = 1))|,  (B.14)

where the last line corresponds to a multiplication of |w_1y − w_0y| with the degree to which statistical parity is fulfilled.

B.1.3 Proof of Proposition 4

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_DS | J = j, A = 0) = E(U_DS | J = j, A = 1)  (B.15)

Since the claims differentiator is the attribute Y, i.e., J = Y, and the only morally relevant value of Y is 1 (i.e., j = {1}), this can be simplified to:

E(U_DS | Y = 1, A = 0) = E(U_DS | Y = 1, A = 1)  (B.16)

For y_i = 1, the decision subject utility (see Equation 1) is:

u_DS,i = w_01 + (w_11 − w_01) · d_i.  (B.17)

Thus, the expected utility for individuals of type Y = 1 in group a can be written as:

E(U_DS | Y = 1, A = a) = w_01 + (w_11 − w_01) · P(D = 1 | Y = 1, A = a).  (B.18)

If w_11 and w_01 do not depend on the group membership (w_d1 ⊥ a), and w_11 ≠ w_01,⁸ then the utility-based fairness following the pattern of egalitarianism (see Equation B.16) requires:

w_01 + (w_11 − w_01) · P(D = 1 | Y = 1, A = 0) = w_01 + (w_11 − w_01) · P(D = 1 | Y = 1, A = 1)
⇔ (w_11 − w_01) · P(D = 1 | Y = 1, A = 0) = (w_11 − w_01) · P(D = 1 | Y = 1, A = 1)
⇔ P(D = 1 | Y = 1, A = 0) = P(D = 1 | Y = 1, A = 1),  (B.19)

where the last line is identical to equality of opportunity.

B.1.4 Proof of Corollary 5

Recall that the degree to which egalitarianism is fulfilled is defined as F_egalitarianism = |E(U_DS | J = j, A = 0) − E(U_DS | J = j, A = 1)| (see Equation 3). If w_11 and w_01 do not depend on the group membership (w_d1 ⊥ a), w_11 ≠ w_01, J = Y, and j = {1}, this can be written as (see Equations B.16 and B.18):

F_egalitarianism = |(w_01 + (w_11 − w_01) · P(D = 1 | Y = 1, A = 0)) − (w_01 + (w_11 − w_01) · P(D = 1 | Y = 1, A = 1))|
= |((w_11 − w_01) · P(D = 1 | Y = 1, A = 0)) − ((w_11 − w_01) · P(D = 1 | Y = 1, A = 1))|
= |(w_11 − w_01) · (P(D = 1 | Y = 1, A = 0) − P(D = 1 | Y = 1, A = 1))|,  (B.20)

where the last line corresponds to a multiplication of |w_11 − w_01| with the degree to which equality of opportunity is fulfilled.

Footnote 8: If w_11 = w_01, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to equality of opportunity would not hold.

B.1.5 Proof of Proposition 6

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_DS | J = j, A = 0) = E(U_DS | J = j, A = 1)  (B.21)

Since the claims differentiator is the decision D, i.e., J = D, and the only morally relevant value of D is 1 (i.e., j = {1}), this can be simplified to:

E(U_DS | D = 1, A = 0) = E(U_DS | D = 1, A = 1)  (B.22)

For d_i = 1, the decision subject utility (see Equation 1) is:

u_DS,i = w_10 + (w_11 − w_10) · y_i.  (B.23)

Thus, the expected utility for individuals in group a that are assigned the decision D = 1 can be written as:

E(U_DS | D = 1, A = a) = w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = a).  (B.24)
If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), and w_11 ≠ w_10,⁹ then the utility-based fairness following the pattern of egalitarianism (see Equation B.22) requires:

w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 0) = w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 1)
⇔ (w_11 − w_10) · P(Y = 1 | D = 1, A = 0) = (w_11 − w_10) · P(Y = 1 | D = 1, A = 1)
⇔ P(Y = 1 | D = 1, A = 0) = P(Y = 1 | D = 1, A = 1),  (B.25)

where the last line is identical to predictive parity.

Footnote 9: If w_11 = w_10, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to predictive parity would not hold.

B.1.6 Proof of Corollary 7

Recall that the degree to which egalitarianism is fulfilled is defined as F_egalitarianism = |E(U_DS | J = j, A = 0) − E(U_DS | J = j, A = 1)| (see Equation 3). If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), w_11 ≠ w_10, J = D, and j = {1}, this can be written as (see Equations B.22 and B.24):

F_egalitarianism = |(w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 0)) − (w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 1))|
= |((w_11 − w_10) · P(Y = 1 | D = 1, A = 0)) − ((w_11 − w_10) · P(Y = 1 | D = 1, A = 1))|
= |(w_11 − w_10) · (P(Y = 1 | D = 1, A = 0) − P(Y = 1 | D = 1, A = 1))|,  (B.26)

where the last line corresponds to a multiplication of |w_11 − w_10| with the degree to which predictive parity is fulfilled.

B.2 Mapping to other group fairness criteria

In Section 4, we mapped our utility-based approach to the three group fairness criteria statistical parity, equality of opportunity, and predictive parity. Here, we additionally show under which conditions our utility-based approach is equivalent to other group fairness criteria: conditional statistical parity, false positive rate parity, equalized odds, false omission rate parity, and sufficiency.

B.2.1 Conditional statistical parity

Conditional statistical parity is defined as P(D = 1 | L = l, A = 0) = P(D = 1 | L = l, A = 1), where L is what [16] refer to as the legitimate attributes. Thus, conditional statistical parity requires equality of acceptance rates across all subgroups in A = 0 and A = 1 who are equal in their value l for L, where L can be any (combination of) feature(s) besides D and A.

Proposition 8 (Conditional statistical parity as utility-based fairness). If the utility weights of all possible outcomes do not depend on the group membership (w_dy ⊥ a), and w_11 = w_10 ≠ w_01 = w_00, then the egalitarian pattern fairness condition with J = L is equivalent to conditional statistical parity.

The proof of Proposition 8 is similar to the one of Proposition 2.

Under these conditions, the degree to which F_egalitarianism is fulfilled is equivalent to the degree to which conditional statistical parity is fulfilled, multiplied by |w_1y − w_0y|. This could easily be proved similarly to the proof of Corollary 3, but with the conditions of the utility-based fairness stated in Proposition 8.

B.2.2 False positive rate (FPR) parity

FPR parity (also called predictive equality [16]) is defined as P(D = 1 | Y = 0, A = 0) = P(D = 1 | Y = 0, A = 1), i.e., it requires parity of false positive rates (FPR) across groups a ∈ A.

Proposition 9 (FPR parity as utility-based fairness).
If w_10 and w_00 do not depend on the group membership (w_d0 ⊥ a), and w_10 ≠ w_00, then the egalitarian pattern fairness condition with J = Y and j = {0} is equivalent to FPR parity.

For y_i = 0, the decision subject utility (see Equation 1) is:

u_DS,i = w_00 + (w_10 − w_00) · d_i.  (B.27)

Thus, the expected utility for individuals of type Y = 0 in group a can
1. What is the main contribution of the paper regarding group fairness metrics?
2. What are the strengths of the proposed framework, particularly in its mathematical formulation and representation of various fairness metrics?
3. What are the weaknesses or limitations of the framework, and how does the reviewer suggest improving it?
4. Are there any practical methods or formalisms that are not covered by the unifying framework, and if so, why?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors propose a unifying framework for group fairness metrics which exposes different issues and justice patterns as choices in a mathematical equation. The key idea is to use explicitly the notion of utility (and its expectation) in the definitions of fairness. By doing this they show not only that it is possible to represent all the most important fairness metrics in their formulation, but also that it is possible to construct new, more complex forms of fairness which address common criticisms of fairness criteria.

Strengths And Weaknesses
Strengths: The article aims to be formal, mathematical, and precise. Within this goal, it works well, and reads as easily as would be expected in this context. The authors make a good job of pointing to other articles (included in the additional material) which have a deeper philosophical perspective and are, to some extent, easier for a less technical audience to read.
Weaknesses: It would be really good to have the authors' view of what the drawbacks of their definition are. Often, unifying frameworks leave out some methods used in practice, and it would be great to have the authors' own view about what they have left behind. If nothing is left outside their framework, the authors should clearly state that claim.

Questions
Are there fairness formalisms and practical methods which are not covered by the unifying framework? Which are those? Why?

Limitations
The authors should be more clear whether there are fairness formalisms which are not representable in their framework and why.
NIPS
Title Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics Abstract Group fairness metrics are an established way of assessing the fairness of prediction1 based decision-making systems. However, these metrics are still insufficiently 2 linked to philosophical theories, and their moral meaning is often unclear. We 3 propose a general framework for analyzing the fairness of decision systems based 4 on theories of distributive justice, encompassing different established “patterns 5 of justice” that correspond to different normative positions. We show that the 6 most popular group fairness metrics can be interpreted as special cases of our 7 approach. Thus, we provide a unifying and interpretative framework for group 8 fairness metrics that reveals the normative choices associated with each of them 9 and that allows understanding their moral substance. At the same time, we provide 10 an extension of the space of possible fairness metrics beyond the ones currently 11 discussed in the fair ML literature. Our framework also allows overcoming several 12 limitations of group fairness metrics that have been criticized in the literature, most 13 notably (1) that they are parity-based, i.e., that they demand some form of equality 14 between groups, which may sometimes be harmful to marginalized groups, (2) that 15 they only compare decisions across groups, but not the resulting consequences for 16 these groups, and (3) that the full breadth of the distributive justice literature is not 17 sufficiently represented. 18 N/A Group fairness metrics are an established way of assessing the fairness of prediction-1 based decision-making systems. However, these metrics are still insufficiently2 linked to philosophical theories, and their moral meaning is often unclear. We3 propose a general framework for analyzing the fairness of decision systems based4 on theories of distributive justice, encompassing different established “patterns5 of justice” that correspond to different normative positions. We show that the6 most popular group fairness metrics can be interpreted as special cases of our7 approach. Thus, we provide a unifying and interpretative framework for group8 fairness metrics that reveals the normative choices associated with each of them9 and that allows understanding their moral substance. At the same time, we provide10 an extension of the space of possible fairness metrics beyond the ones currently11 discussed in the fair ML literature. Our framework also allows overcoming several12 limitations of group fairness metrics that have been criticized in the literature, most13 notably (1) that they are parity-based, i.e., that they demand some form of equality14 between groups, which may sometimes be harmful to marginalized groups, (2) that15 they only compare decisions across groups, but not the resulting consequences for16 these groups, and (3) that the full breadth of the distributive justice literature is not17 sufficiently represented.18 1 Introduction19 Supervised machine learning (ML) is increasingly being used for prediction-based decision making20 in various consequential applications, such as credit lending, school admission, and recruitment.21 Recent work has shown that the use of algorithms for decision making can reinforce existing biases22 or introduce new ones [8]. Consequently, fairness has emerged as an important desideratum for23 automated decision making. 
As recent cases in practice have shown, this is crucial in order to mitigate unjustified disadvantages towards certain demographic groups (see, e.g., [2, 46, 21, 40]). However, quantifying the fairness of decision making systems is not straightforward, as any morally appropriate notion of fairness heavily depends on the given context.

Many different measures have emerged in the algorithmic fairness literature to assess and mitigate unfairness towards marginalized groups in decision making systems. Many of the proposed notions of fairness are in the category of so-called group fairness criteria [7], some of which are mathematically incompatible in practice (Kleinberg et al. [32], Chouldechova [14]). Therefore, satisfying such a fairness criterion comes at the expense of not being able to satisfy others (Kleinberg et al. [31], Wong [55]). Most existing group fairness criteria demand equality of a certain value between different socio-demographic groups [12], although our framework is also compatible with other notions of fairness that concern groups of individuals, such as preference-based fairness [56, 30]. This stands in contrast to the comparison of individuals, as done in other types of fairness such as individual fairness [18, 52], envy-freeness [6], or counterfactual fairness (Kusner et al. [34]). Readers unfamiliar with group fairness may refer to [38, Chapter 2], [53], and [7] for an overview of the topic. We briefly introduce and formally define the most-discussed group fairness criteria in Appendix A.

Much of the algorithmic fairness literature revolves around a limited set of group fairness metrics and is often not clearly linked to the many philosophical theories of justice that have been well discussed. Kuppler et al. [33] find that there is little to no overlap between philosophical theories of justice and metrics in the algorithmic fairness literature and conclude that "apparently, the fair machine learning literature has not taken full advantage of the rich and longstanding literature on distributive justice" [33, p. 17]. Therefore, the definitions of group fairness could be described as quite narrow when viewed from a philosophical perspective. This becomes evident when thinking about an example: group fairness metrics typically demand that groups are equal with respect to some metric. Demanding equality between groups often makes sense, but consider a case in which we could increase the utility of one group without harming another: should we do this? While we cannot say that this is always a good idea, it at least seems to be a reasonable objection to group fairness metrics, which demand equality at all costs. Therefore, this paper asks whether group fairness metrics can be extended to compare groups in other ways.

As of today, only a limited number of fairness metrics have been discussed, forcing stakeholders to choose between a set of pre-defined metrics that they then have to justify for their context. This paper, in contrast, presents a general framework for the derivation of targeted and context-specific fairness metrics, starting from values and moral views, and connects these to the philosophical literature, in particular to theories of distributive justice.

Our main contributions can be summarized as follows:
1. We propose a general framework for assessing the fairness of prediction-based decision systems, based on theories of distributive justice, and allowing for different established "patterns of justice" that correspond to different normative positions. The framework is based on an analysis of how utility is distributed between groups. "Pattern of justice" refers to normative ideas of what constitutes a just distribution.
2. We show that the most popular group fairness metrics can be interpreted as special cases of this approach, which thus establishes a unifying framework that includes established metrics, but also shows how new ones can be constructed.

We first present existing literature on group fairness (including its limitations) in Section 2. In Section 3, we present our unified framework for utility-based definitions of group fairness. We focus on the mathematical formalization of different aspects of the distributive justice literature while keeping the review of the philosophical side short. More details about the philosophical side can be found in the companion paper [3]. Section 4 then demonstrates that existing group fairness metrics are special cases of our utility-based approach. Finally, we discuss the implications of this and possible future work in Section 5.

2 Limitations of current group fairness criteria

Existing group fairness criteria pursue an egalitarian approach. This means that they demand equality of a certain value between different socio-demographic groups [12]. The fulfillment of these criteria is easy to assess, as this only requires access to a few variables (e.g., to check whether statistical parity is satisfied, we only need the decisions and the group membership of individuals). However, they also come with several limitations:

The "leveling down objection" As has been shown by [27], in some cases, enforcing group fairness criteria can yield worse results for all groups in order to ensure parity between the groups. This is what is known as the "leveling down objection", which is often brought forward to challenge egalitarianism in the philosophical literature [41, 17]: in a case in which equality requires us to worsen the outcomes for everyone, should we really demand equality, or should we rather tolerate some inequalities? As criticized by Cooper and Abrams [15] and Weerts et al. [54], existing definitions of group fairness lack this differentiation, as they always minimize inequality.

No consideration of consequences As pointed out by Hertweck et al. [24] and Weerts et al. [54], a large part of the existing work on fairness criteria seems to focus on an equal distribution of favorable decisions and not on the consequences of these decisions. Binns [11] notes that these criteria "[assume] a uniform valuation of decision outcomes across different populations" [11, p. 6], and that this assumption does not always hold. Whether a loan approval has a positive effect on one's life or not arguably depends on one's ability to repay this loan (and possibly on other individual attributes). This narrow focus on the algorithm's decisions instead of its consequences makes it difficult to use existing group fairness criteria for a moral assessment of unfairness in decision making systems.
Parity-based criteria that only consider the decisions but not their consequences do not allow us to deliberately give positive decisions to a larger share of the disadvantaged group, as this would be a form of unequal treatment. However, Kasy and Abebe [29] argue that in such a case, unequal treatment can be required by justice to reduce overall inequalities. Several works have therefore taken a utility-based view of fairness. The utility-based definitions of fairness by Heidari et al. [22] focus on the effects of decisions, while [13] developed a method that follows the Rawlsian leximin principle to increase the welfare of the worse-off groups. However, none of them provides a general framework that encompasses different theories of distributive justice.

Limited set of fairness definitions Another limitation of existing group fairness criteria is that they represent a limited set of alternatives. One has to choose one over the others, as they are mathematically incompatible [32, 14]. [47, 28] have highlighted that the criteria differ with respect to underlying moral values. Thus, solely choosing one among the limited set of criteria might fail to adequately represent a morally appropriate definition of fairness for a given context. Heidari et al. [23] show how existing group fairness criteria can be viewed as instantiations of the equality of opportunity (EOP) principle. Similarly, [10] show that they can be viewed as special cases of a more general principle of fairness they call fair equality of chances (FEC). This way, they provide a framework through which the existing fairness criteria can be viewed. However, the conditions under which the existing fairness criteria map to EOP (or to FEC, respectively) are not always given. We cannot expect every application to fall neatly into one of these conditions and thus cannot expect to find a fitting fairness criterion among the ones already proposed in the group fairness literature. These more general notions of fairness might be suitable to grasp the different existing notions of group fairness. However, they do not adequately represent the complexity of the distributive justice literature (Kuppler et al. [33]). In this paper, we want to bridge the gap between fair machine learning and philosophical theories of distributive justice.

3 A framework for fairness evaluations based on distributive justice

As discussed in Section 2, current group fairness criteria have some serious shortcomings. Clearly, they do not reflect the full breadth of the literature on distributive justice [33]. To address this issue (at least partially), we propose a utility-based extension of group fairness. This section introduces this approach from a rather technical perspective. More details on its links to the literature on distributive justice can be found in [3]. Our approach is based on the observation that each decision system creates a distribution of utility among individuals and groups. Theories of distributive justice are concerned with the question of when such a distribution can be considered just. As we will later show, some of these theories can be mapped to classical group fairness concepts from the fair ML literature (see Section 4).

We consider a decision making system that takes binary decisions D on decision subjects DS of a given population P, based on a decision rule r.
The decision rule assigns each individual i ∈ P a binary decision d_i ∈ {0, 1}, applying the decision rule to some input data, which includes an unknown but decision-relevant binary random variable Y. It does not matter how the decision rule functions: it could, for example, be an automated rule that takes decisions based on predictions of Y from an ML model, or the decisions could be made by humans. We further assume that at least two social groups are defined, denoted by different values of the sensitive attribute A.

3.1 Utility of the decision subjects

As previously discussed, current definitions of group fairness only consider the decisions themselves, but not their consequences, even though the same decision could be beneficial for some and harmful for others [54]. Our approach explicitly considers the consequences of decisions, i.e., the resulting utility (or welfare), which could be positive in the case of a benefit or negative in the case of a harm. We model the consequences with a utility function u which, in our binary context, may depend on both the decision d_i and the value y_i of Y.

The utility u_{DS,i} of a decision subject i is given by:

u_{DS,i} = w_{11} · d_i · y_i + w_{10} · d_i · (1 − y_i) + w_{01} · (1 − d_i) · y_i + w_{00} · (1 − d_i) · (1 − y_i),   (1)

where the utility weights w_{dy} denote the four different utility values that might be realized for the four combinations of the random variables Y and D. (Footnote 1: In practice, however, one could use a much more specific utility function, using other attributes as well. A rather simple extension would also take A into account and define these four utility weights for each group separately. That should be supported by an analysis of the inequality generated in the transition between the decision space and the utility space between different (socially salient or other) groups. In philosophy and economics, the work of Amartya Sen explains why resources do not always convert into the same capabilities (options to be and do) [49, pp. 21-23].)

The utility u_{DS,i} is a realization of a random variable U_{DS}. For assessing the fairness of a decision rule, we are interested in systematic differences between groups. This means that we are interested in the expectation values E(U_{DS}) of the individual utility for the different groups in A. Note that this is a normative choice and that other ways of comparing groups are imaginable, e.g., comparing their aggregated utilities.
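To make Equation (1) concrete, the following minimal Python sketch computes the decision subject utility from decisions, outcomes, and a table of utility weights. The function name, the dictionary-based weight encoding, and the example weights are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def decision_subject_utility(d, y, w):
    """Per-individual utility u_{DS,i} from Equation (1).

    d, y: 0/1 arrays of decisions and outcomes.
    w: dict mapping (d, y) combinations to the utility weights w_dy.
    """
    d, y = np.asarray(d), np.asarray(y)
    return (w[1, 1] * d * y
            + w[1, 0] * d * (1 - y)
            + w[0, 1] * (1 - d) * y
            + w[0, 0] * (1 - d) * (1 - y))

# Hypothetical lending-style weights: an approved, repaid loan is best;
# an approved, defaulted loan is worst.
w = {(1, 1): 1.0, (1, 0): -1.0, (0, 1): -0.2, (0, 0): 0.0}
print(decision_subject_utility([1, 1, 0], [1, 0, 1], w))  # -> [ 1.  -1.  -0.2]
```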
3.2 Relevant groups to compare

Theories of distributive justice are typically concerned with individuals [48], while group fairness is concerned with socially salient groups. Group fairness focuses on comparisons of different groups, as this is what theories of discrimination are concerned with [1]. This poses the question of how the comparison of individuals in distributive justice and the comparison of socially salient groups in group fairness can be combined. John Rawls's concept of "relevant positions" [42, §16, pp. 81-86] unites both ideas. We view "relevant positions" as the groups whose expected utility we want to compare and refer to them as the relevant groups (to compare). (Footnote 2: This builds on Anonymous [4], which refers to relevant positions as "representative individuals".) As defined in [3], relevant groups to compare have comparable moral claims to receive the same utility, but probably do not receive the same utility. (Footnote 3: For a philosophical analysis of comparable moral claims to a good, see [25].) Our approach thus views the theories of distributive justice, which we introduced in Section 2, from the perspective of relevant groups to compare.

To be more specific, relevant groups are defined by two concepts: (1) the claims differentiator J: what makes it the case that some people have the same claims to utility while others have different claims to utility?; (2) the causes of inequality (resulting in socially salient groups A): what are the most likely causes of inequalities?

As described in [3], the claims differentiator identifies people who have equal moral claims. In other words, the utility should be distributed equally between these people. This means we only consider people with equal claims for our fairness evaluation. (Footnote 4: This concept is similar to the justifier described in [36, 10].) Within the group of all individuals that have equal claims to utility (i.e., that are equal in their value for J), we specify groups that are unlikely to end up receiving equal utility, on average, based on the known causes of inequality (i.e., that are different in their value for A, which is sometimes also referred to as the protected attribute). J and A define the relevant groups that group fairness criteria compare. For simplicity, we will assume that there are only two groups A = {0, 1} that are unlikely to receive the same utility. It is, for example, common to expect individuals of a different race or gender to not derive the same utility from decision systems.

In the next step, we want to compare the utilities of the relevant groups. Specifically, we will compare the expectation value of utility over all decisions made for a given population under a given decision rule. We denote this as the expected utility that takes the relevant groups into account, E(U_{DS} | J = j, A = a), where J denotes the claims differentiator, j corresponds to a possible value of the variable J, and a ∈ A denotes the different socially salient groups to be compared with each other. In our framework, assessing fairness means comparing relevant groups with the same j, but different a, with respect to the distribution of utility.
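In code, this comparison amounts to estimating the group-conditional expected utility E(U_DS | J = j, A = a) from a sample. The following is a minimal sketch with illustrative names of ours:

```python
import numpy as np

def expected_group_utility(u, J, A, j, a):
    """Sample estimate of E(U_DS | J = j, A = a).

    u: per-individual utilities (e.g., from decision_subject_utility).
    J, A: per-individual claims-differentiator values and group labels.
    """
    u, J, A = np.asarray(u), np.asarray(J), np.asarray(A)
    mask = (J == j) & (A == a)
    return u[mask].mean()

# Assessing fairness then means comparing, for a fixed j, the values of
# expected_group_utility(u, J, A, j, a) across the groups a in A.
```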
3.3 Patterns for a just distribution of utility

The claims differentiator J tells us which individuals have equal moral claims to the utility distributed by the decision process. However, in some cases, an equal distribution of utility among the relevant groups (defined by J and A) may not be the primary concern for justice (see below). Our approach offers different choices, which we refer to as patterns of justice. For each of them, we will briefly explain their normative view of what constitutes justice. For each pattern, we formulate a fairness constraint and a fairness metric: a fairness constraint is a mathematical formalization of a pattern of justice, which can either be satisfied or not. A fairness metric F, on the other hand, measures the degree to which this criterion is fulfilled. Note that we construct fairness metrics for a binary A = {0, 1}. Therefore, all patterns of justice that we present compare the expected utility of two relevant groups: A = 0 ∧ J = j (i.e., E(U_{DS} | J = j, A = 0)) and A = 1 ∧ J = j (i.e., E(U_{DS} | J = j, A = 1)). However, the patterns of justice that we introduce here (egalitarianism, maximin, prioritarianism, sufficientarianism) can easily be translated to cases with more groups.

In the following, we introduce only a few patterns of justice (representing fairness principles for the allocation of goods) that are widely discussed in the philosophical literature. However, our utility-based definition of group fairness should in no way be seen as limited to these patterns. Our approach can easily be extended to other patterns of justice, and one may also implement their own pattern of justice. Our goal here is simply to highlight a few popular patterns of justice and how they can be embedded in our approach.

3.3.1 Egalitarianism

Egalitarianism, as the name suggests, demands equality [5]. Egalitarianism as a broad concept does not, however, specify what should be equalized. This is the subject of the "equality of what" debate initiated by Sen [48]. One could, for example, aim to equalize opportunities (equality of opportunity) or outcomes (equality of outcomes).

Fairness criterion The egalitarian fairness criterion is satisfied if the expected utility is equal for the relevant groups:

E(U_{DS} | J = j, A = 0) = E(U_{DS} | J = j, A = 1)   (2)

Fairness metric The degree to which egalitarianism is fulfilled is measured as the absolute difference between the two groups' expected utilities (lower values are better):

F_{egalitarianism} = |E(U_{DS} | J = j, A = 0) − E(U_{DS} | J = j, A = 1)|   (3)

(Footnote 5: Here, we consider the absolute difference in expected utilities. Alternatively, we could also consider the ratio of the two expected utilities.)

3.3.2 Maximin

Maximin describes the principle that, among a set of possible distributions, the one that maximizes the expected utility of the relevant group that is worst off should be chosen [35]. In contrast to egalitarianism, inequalities are thus tolerated if the worst-off group benefits from them. This has been defended by Rawls in the form of the "difference principle" [42, 43].

Fairness criterion The maximin fairness criterion is satisfied if there is no other possible distribution that would lead to a greater expected utility of the worst-off relevant group, which we denote by U_{DS}^{worst-off} = min_{a ∈ A} E(U_{DS} | J = j, A = a). It thus requires that the decision rule r' (which represents the decision taken for each individual) results in a U_{DS}^{worst-off}(r') that is greater than or equal to U_{DS}^{worst-off}(r) for any other decision rule r from the set of all possible decision rules R:

U_{DS}^{worst-off}(r') ≥ max_{r ∈ R} U_{DS}^{worst-off}(r)   (4)

Fairness metric The degree to which maximin is fulfilled is measured as the lowest expected utility among all relevant groups (higher values are better):

F_{maximin} = min_{a ∈ A} E(U_{DS} | J = j, A = a)   (5)
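The two metrics so far are straightforward to compute once the group-conditional expected utilities are estimated. A minimal sketch of ours, assuming `group_utils` maps each group a ∈ {0, 1} to an estimate of E(U_DS | J = j, A = a):

```python
def f_egalitarianism(group_utils):
    # Equation (3): absolute gap between the two groups' expected
    # utilities (lower values are better).
    return abs(group_utils[0] - group_utils[1])

def f_maximin(group_utils):
    # Equation (5): expected utility of the worst-off relevant group
    # (higher values are better).
    return min(group_utils.values())
```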
3.3.3 Prioritarianism

Prioritarianism describes the principle that, among a set of possible distributions, the one that maximizes the weighted sum of utilities across all people should be chosen [26]. In contrast to egalitarianism, inequalities are thus tolerated if they increase this weighted sum of expected utilities. In this weighted sum, the expected utility of the worst-off relevant groups is given a higher weight (the maximin principle can be seen as the extreme version of this, as an infinite weight is given to the worst-off relevant groups).

Fairness criterion The prioritarian fairness criterion is satisfied if there is no other possible distribution that would lead to a greater overall expected utility, which is measured as a weighted aggregation of the relevant groups' expected utilities, where the expected utility of the worst-off relevant group is given a higher weight. It thus requires that the decision rule r' results in a weighted utility Ũ_{DS}(r') = k · U_{DS}^{worst-off}(r') + U_{DS}^{better-off}(r') that is greater than or equal to Ũ_{DS}(r) for any other decision rule r from the set of all possible decision rules R:

Ũ_{DS}(r') ≥ max_{r ∈ R} Ũ_{DS}(r),   (6)

where Ũ_{DS} denotes the sum of decision subject utilities over all groups, with a weight k > 1 applied to the worst-off group.

Fairness metric The degree to which prioritarianism is fulfilled is measured as an aggregate of the (weighted) expected utilities (higher values are better):

F_{prioritarianism} = k · min(E(U_{DS} | J = j, A = 0), E(U_{DS} | J = j, A = 1)) + max(E(U_{DS} | J = j, A = 0), E(U_{DS} | J = j, A = 1))   (7)

3.3.4 Sufficientarianism

Sufficientarianism [50] describes the principle that there is a minimum threshold of utility that should be reached by everyone in expectation. Inequalities between relevant groups above this minimum threshold are acceptable according to this principle. Inequalities are thus tolerated as long as all groups achieve a minimum level of utility in expectation.

Fairness criterion The sufficientarian fairness criterion is satisfied if all groups' expected utilities are above a given threshold t:

∀ a ∈ A: E(U_{DS} | J = j, A = a) ≥ t   (8)

Fairness metric The degree to which sufficientarianism is fulfilled is measured as the number of groups whose expected utility is above the given threshold t (higher values are better):

F_{sufficientarianism} = Σ_{a ∈ A} T_a, where T_a = 1 if E(U_{DS} | J = j, A = a) ≥ t, and T_a = 0 otherwise.
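Continuing the sketch above, the prioritarian and sufficientarian metrics can be written as follows. The weight k and the threshold t are free normative parameters; the defaults below are arbitrary examples of ours:

```python
def f_prioritarianism(group_utils, k=2.0):
    # Equation (7): the worst-off group's expected utility is weighted
    # by k > 1 before being added to the better-off group's expected
    # utility (higher values are better).
    worst = min(group_utils.values())
    best = max(group_utils.values())
    return k * worst + best

def f_sufficientarianism(group_utils, t=0.0):
    # Number of groups whose expected utility reaches the threshold t
    # (higher values are better).
    return sum(1 for u in group_utils.values() if u >= t)
```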
3.4 Extension of group fairness

Based on the mathematical framework outlined in this section, we suggest an extension of the current understanding of group fairness as described in Section 2. Instead of seeing group fairness as demanding equality between socio-demographic groups with respect to some value, we propose the following definition:

Definition 1 (Group fairness). Group fairness is the just distribution of utility among relevant groups.

What makes a distribution just depends on the pattern of justice. Thus, our extended understanding of group fairness does not necessarily require equal expected utilities across groups. Furthermore, our definition ensures that only relevant groups are being compared (in the most familiar case, these correspond to socio-demographic groups).

Group fairness criteria, in our sense, specify when group fairness is satisfied by a decision-making system. From this, it follows that there are more group fairness criteria than previously acknowledged. This extension of group fairness criteria alleviates some of the criticisms of currently popular group fairness criteria, as we will show in Section 5.

4 Relation to existing group fairness criteria

Existing group fairness criteria are special cases of the utility-based extension we propose. In this section, we formally show under which conditions our approach maps to existing group fairness criteria (see Table 1 for a summary of the results). In particular, we look at well-known group fairness criteria: (conditional) statistical parity, equality of opportunity, false positive rate (FPR) parity, equalized odds, predictive parity, false omission rate (FOR) parity, and sufficiency. The mathematical definitions of these criteria can be found in Table 2 in Appendix A. Furthermore, we show how the utility-based group fairness metrics relate to existing ones. In this section, we only demonstrate when our utility-based approach results in one of three often-discussed group fairness criteria: statistical parity, equality of opportunity, and predictive parity. We refer the interested reader to Appendix B.2, where we provide a similar mapping for other existing group fairness criteria.

The findings we present in this section extend those of [23], [36], and [10]. While [23] consider the distribution of undeserved utility (what they call the difference between an individual's actual and effort-based utility), [36] and [10] use the decision subject utility U_{DS} to derive a morally appropriate group fairness definition. This is similar to our approach; however, they only consider the two options U_{DS} = D and U_{DS} = Y, while our approach allows for arbitrary functions f for the utility: U_{DS} = f(D, Y).

Statistical parity (also called demographic parity or group fairness [18]) is defined as P(D = 1 | A = 0) = P(D = 1 | A = 1). For specific decision subject utility weights w_{dy} and without any claims differentiator J, the utility-based fairness criterion derived from our framework is equivalent to statistical parity:

Proposition 2 (Statistical parity as utility-based fairness). If the utility weights of all possible outcomes (as described in Section 3.1) do not depend on the group membership (w_{dy} ⊥ a), and w_{11} = w_{10} ≠ w_{01} = w_{00}, then the egalitarian pattern fairness condition with J = ∅ is equivalent to statistical parity.

The formal proof of Proposition 2 can be found in Appendix B.1.1.

We use w_{1y} to denote the decision subject utility associated with a positive decision (D = 1) and w_{0y} to denote the decision subject utility associated with a negative decision (D = 0). (Footnote 6: Recall that utility weights are denoted by w_{dy}, where both d and y can take the value 0 or 1. For simplicity, we use w_{1y} as a placeholder for the utility weights of all outcomes with a positive decision (d = 1) and for individuals of any type (y ∈ {0, 1}), i.e., w_{10} or w_{11}.) As we showed above, requiring statistical parity can be equivalent to requiring the fulfillment of a utility-based group fairness criterion. However, even if the two criteria are equivalent, this is not necessarily true if we compare the group fairness metrics that specify the degree to which these two criteria are fulfilled, i.e., if we compare the degree to which statistical parity is fulfilled with the degree to which a utility-based fairness metric is fulfilled:
Corollary 3 (Partial fulfillment of statistical parity in terms of utility-based fairness). Suppose that the degree to which statistical parity is fulfilled is defined as the absolute difference in decision ratios across groups, i.e., |P(D = 1 | A = 0) − P(D = 1 | A = 1)|. If the utility weights of all possible outcomes do not depend on the group membership (w_{dy} ⊥ a), and w_{11} = w_{10} ≠ w_{01} = w_{00} (i.e., w_{1y} ≠ w_{0y}), and J = ∅, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which statistical parity is fulfilled, multiplied by |w_{1y} − w_{0y}|.

The formal proof of Corollary 3 can be found in Appendix B.1.2. Intuitively, F_{egalitarianism}, which is derived from the utility-based fairness approach and represents the degree to which egalitarianism is fulfilled, can be seen as the degree to which statistical parity is fulfilled, weighted by the absolute difference in utility for the decision received (decision subject utility for a positive versus a negative decision).

Equality of opportunity (also called TPR parity) is defined as P(D = 1 | Y = 1, A = 0) = P(D = 1 | Y = 1, A = 1), i.e., it requires parity of true positive rates (TPR) across groups a ∈ A [20].

Proposition 4 (Equality of opportunity as utility-based fairness). If w_{11} and w_{01} do not depend on the group membership (w_{d1} ⊥ a), and w_{11} ≠ w_{01}, then the egalitarian pattern fairness condition with J = Y and j = {1} is equivalent to equality of opportunity.

The formal proof of Proposition 4 can be found in Appendix B.1.3. Compared to statistical parity, equality of opportunity only requires equal acceptance rates across those subgroups of A who are of type Y = 1. This corresponds to the claims differentiator j = {1} for J = Y. Thus, we simply require the utility weights w_{11} and w_{01} to be unequal and independent of a (which means that the utility weights w_{11} and w_{01} are constant across groups). As is the case for statistical parity, there are differences when looking at the degree to which the two notions of fairness are fulfilled (equality of opportunity and the utility-based fairness under the conditions specified in Proposition 4):

Corollary 5 (Partial fulfillment of equality of opportunity in terms of utility-based fairness). Suppose that the degree to which equality of opportunity is fulfilled is defined as the absolute difference in decision ratios for individuals of type Y = 1 across groups, i.e., |P(D = 1 | Y = 1, A = 0) − P(D = 1 | Y = 1, A = 1)|. If w_{11} and w_{01} do not depend on the group membership (w_{d1} ⊥ a), w_{11} ≠ w_{01}, J = Y, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which equality of opportunity is fulfilled, multiplied by |w_{11} − w_{01}|.

The formal proof of Corollary 5 can be found in Appendix B.1.4.
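Propositions 2 and 4, together with Corollaries 3 and 5, can be checked numerically. The following sketch, with simulation parameters of our own choosing, confirms that the egalitarian utility gap equals the statistical parity gap scaled by |w_{1y} − w_{0y}| under the conditions of Proposition 2, and the equality of opportunity gap scaled by |w_{11} − w_{01}| once we condition on J = Y with j = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.integers(0, 2, n)                        # group membership
Y = rng.binomial(1, np.where(A == 1, 0.6, 0.4))  # outcomes differ by group
D = rng.binomial(1, np.where(A == 1, 0.7, 0.5))  # decisions differ by group

# Conditions of Proposition 2: w11 = w10 != w01 = w00, independent of a.
w1y = w11 = w10 = 2.0
w0y = w01 = w00 = 0.5
U = w11*D*Y + w10*D*(1 - Y) + w01*(1 - D)*Y + w00*(1 - D)*(1 - Y)

# Corollary 3: F_egalitarianism = |w1y - w0y| * statistical parity gap.
f_egal = abs(U[A == 0].mean() - U[A == 1].mean())
sp_gap = abs(D[A == 0].mean() - D[A == 1].mean())
print(np.isclose(f_egal, abs(w1y - w0y) * sp_gap))  # True

# Corollary 5: condition on the claims differentiator J = Y with j = 1.
m = Y == 1
f_egal_y1 = abs(U[m & (A == 0)].mean() - U[m & (A == 1)].mean())
eo_gap = abs(D[m & (A == 0)].mean() - D[m & (A == 1)].mean())
print(np.isclose(f_egal_y1, abs(w11 - w01) * eo_gap))  # True
```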
Predictive parity (also called PPV parity [9] or the outcome test [51]) is defined as P(Y = 1 | D = 1, A = 0) = P(Y = 1 | D = 1, A = 1), i.e., it requires parity of positive predictive values (PPV) across groups a ∈ A.

Proposition 6 (Predictive parity as utility-based fairness). If w_{11} and w_{10} do not depend on the group membership (w_{1y} ⊥ a), and w_{11} ≠ w_{10}, then the egalitarian pattern fairness condition with J = D and j = {1} is equivalent to predictive parity.

The formal proof of Proposition 6 can be found in Appendix B.1.5. Compared to equality of opportunity, predictive parity requires an equal share of individuals to be of type Y = 1 among those subgroups of A who receive the decision D = 1. This corresponds to the claims differentiator j = {1} for J = D. Thus, we simply require the utility weights w_{11} and w_{10} to be unequal and independent of a. As is the case for the other group fairness criteria, there are differences regarding the degree to which the two notions of fairness are fulfilled (predictive parity and the utility-based fairness under the conditions specified in Proposition 6):

Corollary 7 (Partial fulfillment of predictive parity in terms of utility-based fairness). Suppose that the degree to which predictive parity is fulfilled is defined as the absolute difference in the ratio of individuals that are of type Y = 1 among all those that are assigned the decision D = 1 across groups, i.e., |P(Y = 1 | D = 1, A = 0) − P(Y = 1 | D = 1, A = 1)|. If w_{11} and w_{10} do not depend on the group membership (w_{1y} ⊥ a), w_{11} ≠ w_{10}, J = D, and j = {1}, then the degree to which egalitarianism is fulfilled is equivalent to the degree to which predictive parity is fulfilled, multiplied by |w_{11} − w_{10}|.

The formal proof of Corollary 7 can be found in Appendix B.1.6.

Considering Table 1, we see that existing group fairness criteria have a narrow understanding of utility and do not tolerate inequalities, which can ultimately be harmful to already marginalized groups, as previous work has shown [27]. Moreover, existing group fairness criteria embed assumptions about who has equal or different moral claims to utility. If we were to, for example, demand equalized odds for credit lending (where D is the bank's decision to either approve a loan (D = 1) or reject it (D = 0), and Y is the loan applicant's ability to repay the loan (Y = 1) or not (Y = 0)), we would make the following assumption: people who differ in their ability to repay their loans have different claims to utility. We must thus equalize the expected utilities between people who are able to repay their loans, and we must also equalize the expected utilities between people who are not able to repay their loans. However, the assumptions listed in Table 1 may not be met for all decision making systems. Our utility-based extension is thus necessary to implement other views of justice.

5 Discussion

As we have seen, existing group fairness criteria are special cases of our utility-based approach. This approach addresses several of the limitations of existing group fairness criteria that we discussed in Section 2.

The "leveling down objection" The "leveling down objection" is a prevalent anti-egalitarian argument [41, 17], saying that less inequality is not desirable if this requires lowering the better-off group's welfare to match that of the worse-off group. On this basis, choosing egalitarianism as the pattern of justice has been criticized in the algorithmic fairness literature (see, e.g., [36, 27, 54]). Our approach allows using other patterns of justice, such as maximin, prioritarianism, or sufficientarianism (see Section 3.3). Other patterns that can be formalized as mathematical formulas may also be used. One could, for example, combine several patterns into one and require equal expected utilities across groups as long as none of the groups is better off than it would be without any fairness requirement. This would represent a combination of egalitarianism and a group-specific baseline threshold (similar to sufficientarianism), making a "leveling down" of the better-off group impossible and adhering to the Pareto principle. Therefore, our approach links group fairness to a much larger part of the literature on distributive justice than current group fairness criteria do.
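One possible reading of the combined pattern just described, sketched as code: among candidate decision rules, keep only those that level no group down relative to its baseline without any fairness requirement, and among those pick the most equal one. The inputs and names here are hypothetical constructions of ours, not a procedure specified in the paper:

```python
def select_combined_pattern(candidates, baseline):
    """Sketch of an egalitarianism-plus-baseline pattern.

    candidates: dict rule_name -> {group a in {0, 1}: E(U_DS | A = a)}.
    baseline: {group a: expected utility under the unconstrained rule}.
    """
    # Admissible rules leave no group below its unconstrained baseline.
    admissible = {
        name: gu for name, gu in candidates.items()
        if all(gu[a] >= baseline[a] for a in gu)
    }
    if not admissible:
        return None  # every candidate would level some group down
    # Among admissible rules, minimize the egalitarian utility gap.
    return min(admissible, key=lambda n: abs(admissible[n][0] - admissible[n][1]))
```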
No consideration of consequences Existing group fairness criteria only consider the distribution of either D or Y. This could be interpreted as analyzing the distribution of utility but assuming that utility is equivalent to either D or Y instead of, for example, a combination of D and Y. Existing group fairness criteria thus represent a very confining definition of utility. Our approach acknowledges that the utility of the decision subjects does not only depend on the decision itself but also on other attributes, such as one's ability to repay a loan or one's socioeconomic status (see, e.g., [24, 54, 11]). This is represented through the utility function described in Section 3.1.

Limited set of fairness definitions Previous attempts to guide stakeholders in choosing appropriate fairness criteria have taken the form of explicit rules, such as in [45, 37, 44]. Such rules, however, presuppose a limited set of fairness definitions between which stakeholders can choose. Instead, we provide a method to construct ad-hoc fairness criteria that reflect the values decided on by the stakeholders, by combining the definition of the utility function for decision subjects (Section 3.1), the relevant groups to compare (Section 3.2), and the pattern for a just distribution of utility (Section 3.3).

Many important questions remain and may be the subject of future research: What are relevant trade-offs when imposing utility-based group fairness criteria as requirements? Optimal decision rules for existing group fairness criteria have been derived by [20, 16, 9]; do they change for the fairness criteria defined by our approach? Further, while our approach creates a link between group fairness and different theories of justice, it does not cover theories of distributive justice that are structurally different from the ones we discussed, e.g., Nozick's entitlement theory [39]. It is unclear how such theories could be represented in formalized fairness criteria. Moreover, there is a risk that decision makers simply use our approach to bluewash their decision making system, which they may claim to be "fair" and "unbiased" after coming up with a fairness criterion that neatly fits their own goals. This is an issue with other fairness criteria as well. Therefore, it is important to make the process of defining fairness criteria accessible to the public, so that decision subjects can get involved and hold decision makers accountable. This raises the question: with utility functions being notoriously hard to define [49, 19], how can our approach be made accessible enough for practical use? What may be needed is a process for eliciting values from stakeholders. One may object that this makes group fairness criteria similarly difficult to implement as individual fairness and counterfactual fairness. Our response is that existing group fairness criteria might seem easier to use, but they still embed values and assumptions about the context in which they are used. Our approach helps to make these assumptions explicit.

References

[1] Andrew Altman. 2020. Discrimination. In The Stanford Encyclopedia of Philosophy (Winter 2020 ed.), Edward N. Zalta (Ed.).
Metaphysics Research Lab, Stanford University.
[2] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[3] Anonymous. 2022. A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs. (2022). Unpublished manuscript.
[4] Anonymous. 2022. Representative Individuals. (2022). Unpublished manuscript.
[5] Richard Arneson. 2013. Egalitarianism. In The Stanford Encyclopedia of Philosophy (Summer 2013 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[6] Maria-Florina F Balcan, Travis Dick, Ritesh Noothigattu, and Ariel D Procaccia. 2019. Envy-Free Classification. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2019/file/e94550c93cd70fe748e6982b3439ad3b-Paper.pdf
[7] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2020. Fairness and Machine Learning. http://fairmlbook.org. Incomplete working draft.
[8] Solon Barocas and Andrew D Selbst. 2016. Big Data's Disparate Impact. California Law Review 104, 3 (2016), 671–732. http://www.jstor.org/stable/24758720
[9] Joachim Baumann, Anikó Hannák, and Christoph Heitz. 2022. Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3531146.3534645
[10] Joachim Baumann and Christoph Heitz. 2022. Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation. In 2022 9th Swiss Conference on Data Science (forthcoming).
[11] Reuben Binns. 2018. Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research, Vol. 81), Sorelle A. Friedler and Christo Wilson (Eds.). PMLR, New York, NY, USA, 149–159. http://proceedings.mlr.press/v81/binns18a.html
[12] Reuben Binns. 2020. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 514–524.
[13] Violet Xinying Chen and JN Hooker. 2022. Combining leximax fairness and efficiency in a mathematical programming model. European Journal of Operational Research 299, 1 (2022), 235–248.
[14] Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 153–163.
[15] A. Feder Cooper and Ellen Abrams. 2021. Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) (AIES '21). Association for Computing Machinery, New York, NY, USA, 46–54. https://doi.org/10.1145/3461702.3462519
[16] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 797–806.
[17] Roger Crisp. 2003. Equality, Priority, and Compassion.
Ethics 113, 4 (2003), 745–763. https://doi.org/10.1086/373954
[18] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 214–226.
[19] Charles Elkan. 2001. The Foundations of Cost-Sensitive Learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2 (IJCAI'01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 973–978.
[20] Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413 (2016).
[21] Elisa Harlan and Oliver Schnuck. 2021. Objective or biased: On the questionable use of Artificial Intelligence for job applications. Bayerischer Rundfunk (BR) (2021). https://interaktiv.br.de/ki-bewerbung/en/
[22] Hoda Heidari, Claudio Ferrari, Krishna Gummadi, and Andreas Krause. 2018. Fairness behind a veil of ignorance: A welfare analysis for automated decision making. Advances in Neural Information Processing Systems 31 (2018).
[23] Hoda Heidari, Michele Loi, Krishna P Gummadi, and Andreas Krause. 2019. A moral framework for understanding fair ML through economic models of equality of opportunity. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 181–190.
[24] Corinna Hertweck, Christoph Heitz, and Michele Loi. 2021. On the Moral Justification of Statistical Parity. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT '21). Association for Computing Machinery, New York, NY, USA, 747–757. https://doi.org/10.1145/3442188.3445936
[25] Sune Holm. 2022. The Fairness in Algorithmic Fairness. Res Publica (2022), 1–17.
[26] Nils Holtug. 2017. Prioritarianism. In Oxford Research Encyclopedia of Politics.
[27] Lily Hu and Yiling Chen. 2020. Fair classification and social welfare. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 535–545.
[28] Abigail Z Jacobs and Hanna Wallach. 2021. Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 375–385.
[29] Maximilian Kasy and Rediet Abebe. 2021. Fairness, equality, and power in algorithmic decision-making. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 576–586.
[30] Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. 2019. Preference-Informed Fairness. CoRR abs/1904.01793 (2019). arXiv:1904.01793 http://arxiv.org/abs/1904.01793
[31] Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R Sunstein. 2019. Discrimination in the Age of Algorithms. Journal of Legal Analysis 10 (2019), 113–174. https://doi.org/10.1093/jla/laz001
[32] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016).
[33] Matthias Kuppler, Christoph Kern, Ruben L. Bach, and Frauke Kreuter. 2021. Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? arXiv:2105.01441 [stat.ML]
[34] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. arXiv preprint arXiv:1703.06856 (2017).
[35] Christian List. 2022. Social Choice Theory.
In The Stanford Encyclopedia of Philosophy (Spring 2022 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
[36] Michele Loi, Anders Herlitz, and Hoda Heidari. 2019. A Philosophical Theory of Fairness for Prediction-Based Decisions. Available at SSRN 3450300 (2019).
[37] Karima Makhlouf, Sami Zhioua, and Catuscia Palamidessi. 2021. On the Applicability of Machine Learning Fairness Notions. SIGKDD Explor. Newsl. 23, 1 (May 2021), 14–23. https://doi.org/10.1145/3468507.3468511
[38] Arvind Narayanan. 2018. Translation tutorial: 21 fairness definitions and their politics. In Conference on Fairness, Accountability and Transparency.
[39] Robert Nozick. 1974. Anarchy, State, and Utopia. Vol. 5038. New York: Basic Books.
[40] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
[41] Derek Parfit. 1995. Equality or priority. Department of Philosophy, University of Kansas.
[42] John Rawls. 1999. A Theory of Justice (2nd ed.). Harvard University Press, Cambridge, Massachusetts.
[43] John Rawls. 2001. Justice as Fairness: A Restatement. Harvard University Press.
[44] Boris Ruf and Marcin Detyniecki. 2022. A Tool Bundle for AI Fairness in Practice. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–3.
[45] Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. 2018. Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577 (2018).
[46] Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson. 2021. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. The Markup (2021). https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them
[47] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59–68.
[48] Amartya Sen. 1980. Equality of what? The Tanner Lecture on Human Values 1 (1980), 197–220.
[49] Amartya Sen. 1985. The Standard of Living. The Tanner Lecture on Human Values (1985). https://tannerlectures.utah.edu/_resources/documents/a-to-z/s/sen86.pdf
[50] Liam Shields. 2020. Sufficientarianism. Philosophy Compass 15, 11 (2020), e12704. https://doi.org/10.1111/phc3.12704
[51] Camelia Simoiu, Sam Corbett-Davies, Sharad Goel, et al. 2017. The problem of infra-marginality in outcome tests for discrimination. The Annals of Applied Statistics 11, 3 (2017), 1193–1216.
[52] Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (London, United Kingdom) (KDD '18). Association for Computing Machinery, New York, NY, USA, 2239–2248. https://doi.org/10.1145/3219819.3220046
[53] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
IEEE, 1–7.
[54] Hilde Weerts, Lambèr Royakkers, and Mykola Pechenizkiy. 2022. Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning. arXiv preprint arXiv:2202.08536 (2022).
[55] Pak-Hang Wong. 2020. Democratizing Algorithmic Fairness. Philosophy & Technology 33, 2 (2020), 225–244. https://doi.org/10.1007/s13347-019-00355-w
[56] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, and Adrian Weller. 2017. From Parity to Preference-Based Notions of Fairness in Classification. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 228–238.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] The limitations are described in Section 5.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] The potential negative effect of decision makers misusing our approach for bluewashing is briefly discussed in Section 5. However, it should be noted that this is a potential negative effect of all approaches to measuring fairness.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See Sections 3, 4, and B.
(b) Did you include complete proofs of all theoretical results? [Yes] See Appendix B.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A Existing group fairness criteria

Here, we briefly introduce the most-discussed group fairness criteria. Table 2 lists the parity requirements associated with these criteria.
Statistical parity demands that the share of positive decisions is equal between socio-demographic groups (defined by the sensitive attribute A = {0, 1}) [18]; for the criterion conditional statistical parity, this is only required within subgroups defined by a set of so-called legitimate attributes l ∈ L [16]. Equality of opportunity similarly demands equal shares of positive decisions between socio-demographic groups, but only for those whose target variable is positive (Y = 1) [20]; thus, it is sometimes also referred to as true positive rate (TPR) parity. Equalized odds (sometimes also called separation) requires both equality of opportunity and FPR parity (which is similar to equality of opportunity but limited to individuals of type Y = 0). In contrast, predictive parity demands equal shares of individuals of type Y = 1 across socio-demographic groups, but only for those who received a positive decision D = 1; thus, it is sometimes also referred to as positive predictive value (PPV) parity. Sufficiency requires both PPV parity and false omission rate (FOR) parity (which is similar to PPV parity but limited to individuals who received a negative decision D = 0).

B Mapping existing group fairness criteria to our utility-based approach

B.1 Omitted proofs

B.1.1 Proof of Proposition 2

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_{DS} | J = j, A = 0) = E(U_{DS} | J = j, A = 1)   (B.9)

Since there is no claims differentiator (i.e., J = ∅), this can be simplified to:

E(U_{DS} | A = 0) = E(U_{DS} | A = 1)   (B.10)

For w_{11} = w_{10} and w_{01} = w_{00}, the decision subject utility (see Equation 1) is:

u_{DS,i} = w_{0y} + (w_{1y} − w_{0y}) · d_i,   (B.11)

where w_{1y} denotes the decision subject utility associated with a positive decision (D = 1) and w_{0y} denotes the decision subject utility associated with a negative decision (D = 0). Thus, the expected utility for individuals of group a can be written as:

E(U_{DS} | A = a) = w_{0y} + (w_{1y} − w_{0y}) · P(D = 1 | A = a).   (B.12)

If the utility weights of all possible outcomes do not depend on the group membership (w_{dy} ⊥ a), and w_{1y} ≠ w_{0y} (Footnote 7: If w_{1y} = w_{0y}, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to statistical parity would not hold.), then the utility-based fairness following the pattern of egalitarianism (see Equation B.10) requires:

w_{0y} + (w_{1y} − w_{0y}) · P(D = 1 | A = 0) = w_{0y} + (w_{1y} − w_{0y}) · P(D = 1 | A = 1)
⇔ (w_{1y} − w_{0y}) · P(D = 1 | A = 0) = (w_{1y} − w_{0y}) · P(D = 1 | A = 1)
⇔ P(D = 1 | A = 0) = P(D = 1 | A = 1),   (B.13)

where the last line is identical to statistical parity.
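The algebra in Equations (B.12) and (B.13) can also be verified symbolically. The following small check of ours (using sympy; the symbol names are illustrative) confirms that the utility gap between groups factors into (w_{1y} − w_{0y}) times the statistical parity gap:

```python
import sympy as sp

w1y, w0y, p0, p1 = sp.symbols("w1y w0y p0 p1", real=True)

# E(U_DS | A = a) as in Equation (B.12), with p_a = P(D = 1 | A = a).
EU0 = w0y + (w1y - w0y) * p0
EU1 = w0y + (w1y - w0y) * p1

# The gap factors as (w1y - w0y) * (p0 - p1), matching (B.13)/(B.14):
print(sp.simplify(EU0 - EU1 - (w1y - w0y) * (p0 - p1)))  # 0
```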
B.1.2 Proof of Corollary 3

Recall that the degree to which egalitarianism is fulfilled is defined as F_{egalitarianism} = |E(U_{DS} | J = j, A = 0) − E(U_{DS} | J = j, A = 1)| (see Equation 3). If the utility weights of all possible outcomes do not depend on the group membership (w_{dy} ⊥ a), w_{11} = w_{10} ≠ w_{01} = w_{00} (i.e., w_{1y} ≠ w_{0y}), and J = ∅, this can be written as (see Equations B.10 and B.12):

F_{egalitarianism} = |(w_{0y} + (w_{1y} − w_{0y}) · P(D = 1 | A = 0)) − (w_{0y} + (w_{1y} − w_{0y}) · P(D = 1 | A = 1))|
= |((w_{1y} − w_{0y}) · P(D = 1 | A = 0)) − ((w_{1y} − w_{0y}) · P(D = 1 | A = 1))|
= |(w_{1y} − w_{0y}) · (P(D = 1 | A = 0) − P(D = 1 | A = 1))|,   (B.14)

where the last line corresponds to a multiplication of |w_{1y} − w_{0y}| with the degree to which statistical parity is fulfilled.

B.1.3 Proof of Proposition 4

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_{DS} | J = j, A = 0) = E(U_{DS} | J = j, A = 1)   (B.15)

Since the claims differentiator is the attribute Y, i.e., J = Y, and the only morally relevant value of Y is 1 (i.e., j = {1}), this can be simplified to:

E(U_{DS} | Y = 1, A = 0) = E(U_{DS} | Y = 1, A = 1)   (B.16)

For y_i = 1, the decision subject utility (see Equation 1) is:

u_{DS,i} = w_{01} + (w_{11} − w_{01}) · d_i.   (B.17)

Thus, the expected utility for individuals of type Y = 1 in group a can be written as:

E(U_{DS} | Y = 1, A = a) = w_{01} + (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = a).   (B.18)

If w_{11} and w_{01} do not depend on the group membership (w_{d1} ⊥ a), and w_{11} ≠ w_{01} (Footnote 8: If w_{11} = w_{01}, then the utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to equality of opportunity would not hold.), then the utility-based fairness following the pattern of egalitarianism (see Equation B.16) requires:

w_{01} + (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 0) = w_{01} + (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 1)
⇔ (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 0) = (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 1)
⇔ P(D = 1 | Y = 1, A = 0) = P(D = 1 | Y = 1, A = 1),   (B.19)

where the last line is identical to equality of opportunity.

B.1.4 Proof of Corollary 5

Recall that the degree to which egalitarianism is fulfilled is defined as F_{egalitarianism} = |E(U_{DS} | J = j, A = 0) − E(U_{DS} | J = j, A = 1)| (see Equation 3). If w_{11} and w_{01} do not depend on the group membership (w_{d1} ⊥ a), w_{11} ≠ w_{01}, J = Y, and j = {1}, this can be written as (see Equations B.16 and B.18):

F_{egalitarianism} = |(w_{01} + (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 0)) − (w_{01} + (w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 1))|
= |((w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 0)) − ((w_{11} − w_{01}) · P(D = 1 | Y = 1, A = 1))|
= |(w_{11} − w_{01}) · (P(D = 1 | Y = 1, A = 0) − P(D = 1 | Y = 1, A = 1))|,   (B.20)

where the last line corresponds to a multiplication of |w_{11} − w_{01}| with the degree to which equality of opportunity is fulfilled.

B.1.5 Proof of Proposition 6

Recall that the utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_{DS} | J = j, A = 0) = E(U_{DS} | J = j, A = 1)   (B.21)

Since the claims differentiator is the decision D, i.e., J = D, and the only morally relevant value of D is 1 (i.e., j = {1}), this can be simplified to:

E(U_{DS} | D = 1, A = 0) = E(U_{DS} | D = 1, A = 1)   (B.22)

For d_i = 1, the decision subject utility (see Equation 1) is:

u_{DS,i} = w_{10} + (w_{11} − w_{10}) · y_i.   (B.23)

Thus, the expected utility for individuals in group a that are assigned the decision D = 1 can be written as:

E(U_{DS} | D = 1, A = a) = w_{10} + (w_{11} − w_{10}) · P(Y = 1 | D = 1, A = a).   (B.24)
B.1.5 Proof of Proposition 6

Recall that utility-based fairness following the pattern of egalitarianism requires equal expected utilities between groups:

E(U_DS | J = j, A = 0) = E(U_DS | J = j, A = 1).   (B.21)

Since the claims differentiator is the same as the decision D = 1, i.e., J = D and the only morally relevant value of D is 1 (i.e., j = {1}), this can be simplified to:

E(U_DS | D = 1, A = 0) = E(U_DS | D = 1, A = 1).   (B.22)

For d_i = 1, the decision subject utility (see Equation 1) is:

u_DS,i = w_10 + (w_11 − w_10) · y_i.   (B.23)

Thus, the expected utility for individuals in group a that are assigned the decision D = 1 can be written as:

E(U_DS | D = 1, A = a) = w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = a).   (B.24)

If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), and w_11 ≠ w_10 (if w_11 = w_10, utility-based fairness following the pattern of egalitarianism would always be satisfied and the equivalence to predictive parity would not hold), then utility-based fairness following the pattern of egalitarianism (see Equation B.22) requires:

w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 0) = w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 1)
⇔ (w_11 − w_10) · P(Y = 1 | D = 1, A = 0) = (w_11 − w_10) · P(Y = 1 | D = 1, A = 1)
⇔ P(Y = 1 | D = 1, A = 0) = P(Y = 1 | D = 1, A = 1),   (B.25)

where the last line is identical to predictive parity.

B.1.6 Proof of Corollary 7

Recall that the degree to which egalitarianism is fulfilled is defined as F_egalitarianism = |E(U_DS | J = j, A = 0) − E(U_DS | J = j, A = 1)| (see Equation 3). If w_11 and w_10 do not depend on the group membership (w_1y ⊥ a), w_11 ≠ w_10, J = D, and j = {1}, this can be written as (see Equations B.22 and B.24):

F_egalitarianism = |(w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 0)) − (w_10 + (w_11 − w_10) · P(Y = 1 | D = 1, A = 1))|
= |((w_11 − w_10) · P(Y = 1 | D = 1, A = 0)) − ((w_11 − w_10) · P(Y = 1 | D = 1, A = 1))|
= |(w_11 − w_10) · (P(Y = 1 | D = 1, A = 0) − P(Y = 1 | D = 1, A = 1))|,   (B.26)

where the last line corresponds to a multiplication of |w_11 − w_10| with the degree to which predictive parity is fulfilled.
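The same pattern holds for predictive parity (Corollary 7): condition on D = 1 and compare realized outcomes. A hedged sketch with invented weights and positive predictive values:

```python
import numpy as np

rng = np.random.default_rng(2)
w11, w10 = 2.0, -1.0  # hypothetical utilities for Y=1 vs Y=0 given D=1

# Outcomes Y for individuals who received D = 1 (J = D, j = {1}).
y0 = rng.binomial(1, 0.8, 100_000)  # P(Y=1 | D=1, A=0), the PPV of group 0
y1 = rng.binomial(1, 0.6, 100_000)  # P(Y=1 | D=1, A=1), the PPV of group 1

eu = lambda y: w10 + (w11 - w10) * y.mean()              # Equation B.24
print(abs(eu(y0) - eu(y1)),                              # F_egalitarianism
      abs(w11 - w10) * abs(y0.mean() - y1.mean()))       # |w11-w10| x PPV gap (B.26)
```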
B.2 Mapping to other group fairness criteria

In Section 4, we mapped our utility-based approach to the three group fairness criteria statistical parity, equality of opportunity, and predictive parity. Here, we additionally show under which conditions our utility-based approach is equivalent to other group fairness criteria: conditional statistical parity, false positive rate parity, equalized odds, false omission rate parity, and sufficiency.

B.2.1 Conditional statistical parity

Conditional statistical parity is defined as P(D = 1 | L = l, A = 0) = P(D = 1 | L = l, A = 1), where L is what [16] refer to as the legitimate attributes. Thus, conditional statistical parity requires equality of acceptance rates across all subgroups in A = 0 and A = 1 who are equal in their value l for L, where L can be any (combination of) feature(s) besides D and A.

Proposition 8 (Conditional statistical parity as utility-based fairness). If the utility weights of all possible outcomes do not depend on the group membership (w_dy ⊥ a), and w_11 = w_10 ≠ w_01 = w_00, then the egalitarian pattern fairness condition with J = L is equivalent to conditional statistical parity.

The proof of Proposition 8 is similar to the one of Proposition 2. Under these conditions, the degree to which F_egalitarianism is fulfilled is equivalent to the degree to which conditional statistical parity is fulfilled, multiplied by |w_1y − w_0y|. This can be proved in the same way as Corollary 3, but with the conditions of the utility-based fairness stated in Proposition 8.

B.2.2 False positive rate (FPR) parity

FPR parity (also called predictive equality [16]) is defined as P(D = 1 | Y = 0, A = 0) = P(D = 1 | Y = 0, A = 1), i.e., it requires parity of false positive rates (FPR) across groups a ∈ A.

Proposition 9 (FPR parity as utility-based fairness). If w_10 and w_00 do not depend on the group membership (w_d0 ⊥ a), and w_10 ≠ w_00, then the egalitarian pattern fairness condition with J = Y and j = {0} is equivalent to FPR parity.

For y_i = 0, the decision subject utility (see Equation 1) is:

u_DS,i = w_00 + (w_10 − w_00) · d_i.   (B.27)

Thus, the expected utility for individuals of type Y = 0 in group a can be written as:

E(U_DS | Y = 0, A = a) = w_00 + (w_10 − w_00) · P(D = 1 | Y = 0, A = a),

and the rest of the argument proceeds exactly as in the proof of Proposition 4, yielding P(D = 1 | Y = 0, A = 0) = P(D = 1 | Y = 0, A = 1), i.e., FPR parity.
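All of these mappings share one recipe: pick a claims differentiator J and a weight pattern, then compare expected utilities between groups. As a compact illustration, here is a minimal Python sketch of that recipe; the function name, the array encoding, and the default weights are our own illustrative choices, not part of the paper.

```python
import numpy as np

def egalitarian_gap(d, y, a, j=None, j_values=(1,), weights=(0.0, 0.0, 1.0, 1.0)):
    """Worst-case group gap in expected decision-subject utility (Equation 3).

    d, y, a : integer arrays of decisions, outcomes, and group membership.
    j       : optional claims-differentiator array, e.g. y (equality of
              opportunity, FPR parity) or d (predictive parity).
    j_values: morally relevant values of J, e.g. (1,) or (0,).
    weights : hypothetical (w00, w01, w10, w11) utility weights.
    """
    w00, w01, w10, w11 = weights
    u = np.where(d == 1, np.where(y == 1, w11, w10),
                 np.where(y == 1, w01, w00))  # per-individual utility, Equation 1
    masks = [np.ones(len(d), bool)] if j is None else [j == v for v in j_values]
    return max(abs(u[m & (a == 0)].mean() - u[m & (a == 1)].mean()) for m in masks)

# Statistical parity:        j=None with w11 = w10, w01 = w00   (Proposition 2)
# Equality of opportunity:   j=y, j_values=(1,)                 (Proposition 4)
# Predictive parity:         j=d, j_values=(1,)                 (Proposition 6)
# FPR parity:                j=y, j_values=(0,)                 (Proposition 9)
```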
1. What is the main contribution of the paper regarding group fairness metrics? 2. What are the strengths and weaknesses of the proposed approach in bridging the gap between philosophical ideas and fair ML? 3. Do you have any concerns or questions about the interpretation of group fairness in the paper? 4. How does the reviewer assess the clarity and technical aspects of the paper's content? 5. Are there any limitations in the paper that the reviewer identifies?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper presents a general framework for understanding group fairness metrics that is connected to relevant literature in philosophy underpinning the relevant values and moral perspectives. Specifically, the paper proposes to interpret group fairness as the "just distribution of utility among relevant groups", as opposed to simply "equality (of decisions) between socio-demographic groups". This highlights the several components of the proposed approach: utility (as opposed to decisions); some notion of justice (as opposed to necessarily equality); comparing relevant groups (as opposed to simply demographic groups). In slightly more detail: the starting point of the proposed approach is to focus on the utility of the individuals (here referred to as decision subjects) and to introduce the concept of a "claims differentiator" J, which identifies individuals with equal claims to utility for the sake of the evaluation. Technically, this is a binary variable, and fairness now means comparing the expected utilities of individuals with the same J=j but different group membership A=a. There is then the question of how to compare these quantities - here comes the component of "pattern of justice". The paper considers several such patterns studied in the philosophical literature and studies their implications in terms of the implied fairness metrics. For example, "egalitarianism" would seek to equalize the above-mentioned quantities, but other principles (Maximin, Prioritarianism) could require different things.
Strengths And Weaknesses
Strengths: The paper is generally clearly written, and bridging the gap between philosophical ideas and fair ML is an important objective.
Weaknesses: The main weakness of the paper in my opinion is that the main contributions of the paper are w.r.t. the authors' portrayal of the notion of group fairness ("equality (of decisions) between socio-demographic groups"), which I'm not sure is actually fair w.r.t. the current state of the literature. Specifically: Confusion between group fairness as a metric and group fairness as an objective: Even with the narrow interpretation of equality of outcomes (or some other statistic that depends on the outcomes), an important distinction in the literature is between group fairness as a metric and group fairness as an objective. I (and I believe many of my colleagues) view group notions of fairness as an important red flag that, if not satisfied, suggests one should look into the entire algorithmic pipeline for issues - be it the problem specification, the data collection, etc. In particular this means that the "solution" may be something like collecting better data, which does not require making any group's welfare worse off. A narrow interpretation of group fairness: There are many more notions of fairness studied in the literature that (i) are utility-based, (ii) take preferences into account, or (iii) speak to notions of fairness that do not necessarily involve parity (such as making sure no group is "worse off" than it would be had there been no other groups). None of them were mentioned in the paper, and I found the claim that the proposed unifying view is novel to be hindered by this.
Another weakness is that while the approach sets out to expose the normative and moral assumptions underpinning group fairness, there is a bit of a "pick and choose" nature to this, as some assumptions which I think are not inherent (and encode some moral assumptions) are taken for granted and not discussed. For example, it is stated that since we are interested in systematic differences between groups, fairness only means we are "interested in the expectation value E(U_DS) of the individual utility". Isn't this also an assumption? Two groups may very well be systematically different even if their expectations are similar (but their variance, for example, is not). Does this not come up in the philosophical literature?
Questions
Minor points: The notation U_{DS,i} seems overly complex; why not simply use u_i? I couldn't follow some of the choices for the degree to which fairness is fulfilled. For example, in the Maximin section, the choice of (5) seemed a bit weird - in particular, wouldn't it be non-zero even if fairness is met? I would expect something like the difference of the terms in Eq (4). Technically, I think it would be helpful to clarify some of the mathematical notation. For example, the notion of independence is used throughout for what I understand are realizations (e.g. w_dy independent of a in Table 1), and I'm not sure this is well-defined.
Limitations
Yes.
NIPS
Title Bayesian Optimization with Unknown Search Space
Abstract Applying Bayesian optimization in problems wherein the search space is unknown is challenging. To address this problem, we propose a systematic volume expansion strategy for Bayesian optimization. We devise a strategy to guarantee that in iterative expansions of the search space, our method can find a point whose function value is within ε of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers a minimal required expansion iteratively. We derive analytic expressions for when to trigger the expansion and by how much to expand. We also provide theoretical analysis to show that our method achieves ε-accuracy after a finite number of iterations. We demonstrate our method on both benchmark test functions and machine learning hyper-parameter tuning tasks and show that it outperforms baselines.

1 Introduction

Choosing where to search matters. A time-tested path in the quest for new products or processes is through experimental optimization. Bayesian optimization offers a sample-efficient strategy for experimental design by optimizing expensive black-box functions [9-11]. But one problem is that users need to specify a bounded region to restrict the search for the objective function's extrema. When tackling a completely new problem, users do not have prior knowledge, hence there is no guarantee that an arbitrarily defined search space contains the global optimum. Thus application of the Bayesian optimization framework when the search region is unknown remains an open challenge [16].

One approach is to use a regularized acquisition function such that its maximum can never be at infinity - hence no search space needs to be declared and an unconstrained optimizer can be used [16]. Other approaches use volume expansion, i.e. starting from the user-defined region, the search space is expanded during the optimization. The simplest strategy is to repeatedly double the volume of the search space every several iterations [16]. Nguyen et al. suggest a volume expansion strategy based on the evaluation budget [12]. All these methods require users to specify critical parameters - for example, regularization parameters [16], growth rate and expansion frequency (volume doubling) [16], or budget [12]. These parameters are difficult to specify in practice. Additionally, [12] is computationally expensive and the user-defined search space needs to be close to the global optimum.

In this paper, we propose a systematic volume expansion strategy for the Bayesian optimization framework wherein the search space is unknown. Without any prior knowledge about the objective function argmax or strict assumptions on the behavior of the objective function, it is impossible to guarantee global convergence when the search space is continuously expanded. To circumvent this problem, we consider the setting where we achieve the global ε-accuracy condition, that is, we aim to find a point whose function value is within ε of the objective function's global maximum. Our volume expansion strategy is based on two guiding principles: 1) the algorithm can reach a point whose function value is within ε of the objective function maximum in one expansion, and 2) the search space should be minimally expanded so that the algorithm does not spend unnecessary evaluations near the search space boundary.
As the objective function is unknown, it is not possible to compute this ideal expansion region. Using the GP-UCB acquisition function as a surrogate, this region is computed as one that contains at least one point whose acquisition function value is within ε of the acquisition function maximum. However, by using a surrogate to approximate the objective function, there is no guarantee that we can achieve the global ε-accuracy within one expansion. Hence multiple expansions are required, and a new expansion is triggered when the local ε-accuracy is satisfied, i.e. when the algorithm can find a point whose function value is within ε of the objective function maximum in the current search space. Analytical expressions for the size of the new expansion space and when to trigger the expansion are derived. The guarantee for the ε-accuracy condition, however, lapses in the expanded region, and so we adjust the acquisition function appropriately to maintain it. Finally, we provide theoretical analysis to show that our proposed method achieves the global ε-accuracy condition after a finite number of iterations. We demonstrate our algorithm on five synthetic benchmark functions and three real hyperparameter tuning tasks for common machine learning models: linear regression with elastic net, multilayer perceptron and convolutional neural network. Our experimental results show that our method achieves better function values with fewer samples compared to state-of-the-art approaches.

In summary, our contributions are:
• Formalising the analysis of the Bayesian optimization framework in an unknown search space setting, and introducing ε-accuracy as a way to track algorithmic performance;
• Providing analytic expressions for how far to expand the search space and when to expand the search space to achieve global ε-accuracy;
• Deriving theoretical global ε-accuracy convergence; and,
• Demonstrating our algorithm on both synthetic and real-world problems and comparing it against state-of-the-art methods.

Our method differs from previous works in that 1) our method does not require any algorithmic parameters, automatically adjusting both when to trigger the expansion and by how much to expand, and 2) our approach is the only one to guarantee the global ε-accuracy condition. This is because we guarantee the local ε-accuracy condition in each search space, thus eventually the global ε-accuracy is achieved. Without this local guarantee, the suggested solution cannot be guaranteed to reach global ε-accuracy. The regularization [16] and the filtering method [12] require the global optimum to be within a bound constructed by either the user-specified regularizer or the budget. The volume doubling method [16] can continue to expand the search space to infinity; however, the local ε-accuracy condition is not guaranteed in each search space.

The paper is organized as follows. Section 2 gives an overview of Bayesian optimization and discusses some of the related work. Section 3 describes the problem setup. Section 4 proposes our new expansion strategy for the Bayesian optimization framework when the search space is unknown. A theoretical analysis for our proposed method is presented in Section 5. In Section 6, we demonstrate the effectiveness of our algorithm by numerical experiments. Finally, Section 7 concludes the paper.
2 Background and Related Work

2.1 Background

Bayesian optimization is a powerful optimization method to find the global optimum of an unknown objective function f(x) by sequential queries [9-11, 17, 18]. First, at time t, a surrogate model is used to approximate the behaviour of f(x) using all the currently observed data D_{t-1} = {(x_i, y_i)}_{i=1}^n, y_i = f(x_i) + ξ_i, where ξ_i ~ N(0, σ²) is the noise. Second, an acquisition function is constructed from the surrogate model; it suggests the next point x_itr to be evaluated. The objective function is then evaluated at x_itr and the new data point (x_itr, y_itr) is added to D_{t-1}. These steps are conducted in an iterative manner to get the best estimate of the global optimum.

The most common choice for the surrogate model used in Bayesian optimization is the Gaussian Process (GP) [14]. Assume the function f follows a GP with mean function m_0(x) and covariance function k(x, x′); the posterior distribution of f given the observed data D_{t-1} = {(x_i, y_i)}_{i=1}^n is a GP with the following posterior mean and variance,

μ_{t-1}(x) = m_0(x) + k_{|D_{t-1}|}(x)^T (K_{|D_{t-1}|} + σ² I_{|D_{t-1}|})^{-1} y_{|D_{t-1}|},
σ²_{t-1}(x) = k(x, x) − k_{|D_{t-1}|}(x)^T (K_{|D_{t-1}|} + σ² I_{|D_{t-1}|})^{-1} k_{|D_{t-1}|}(x),   (1)

where y_{|D_{t-1}|} = [y_1, ..., y_{|D_{t-1}|}]^T, k_{|D_{t-1}|}(x) = [k(x, x_i)]_{i=1}^{|D_{t-1}|}, K_{|D_{t-1}|} = [k(x_i, x_j)]_{i,j}, I_{|D_{t-1}|} is the |D_{t-1}| × |D_{t-1}| identity matrix, and |D_{t-1}| denotes the cardinality of D_{t-1}. To aid readability, in the sequel we drop the notation that shows the dependence of k, K, I, y on |D_{t-1}|.

There are many existing acquisition functions [6, 7, 10, 11, 20]; in this paper, we focus only on the GP-UCB acquisition function [1, 2, 5, 19]. The GP-UCB acquisition function is defined as

α_UCB(x; D_{t-1}) = μ_{t-1}(x) + √β_t σ_{t-1}(x),   (2)

where μ_{t-1}(x), σ_{t-1}(x) are the posterior mean and standard deviation of the GP given observed data D_{t-1}, and β_t ≥ 0 is an appropriate parameter that balances exploration and exploitation. Given a search domain, {β_t} can be chosen as in [19] to ensure global convergence in this domain.

2.2 Related Work

All the work related to the problem of Bayesian optimization with an unknown search space has been described in Section 1. The work in [3] also introduces the term ε-accuracy; however, its purpose is to unify the Bayesian optimization and level-set estimation frameworks.

3 Problem Setup

We wish to find the global argmax x_max of an unknown objective function f : R^d → R, whose argmax is at a finite location, i.e.

x_max = argmax_{x∈S*} f(x),   (3)

where S* is a finite region that contains the argmax of the function f(x). In practice, the region S* is not known in advance, so users need to identify a search domain S_user which is likely to contain the argmax of f(x). This search domain can be set arbitrarily or based on limited prior knowledge. Thus there is no guarantee that S_user contains the global optimum of the objective function. In the trivial cases when the search space S* is known or when S* ⊂ S_user, global convergence can be guaranteed through classical analysis [4, 19]. Here, we consider the general case when S* may or may not be a subset of S_user. Without any prior knowledge about S* or strict assumptions on the behavior of the objective function, it is impossible to guarantee global convergence. Therefore, in this work, instead of solving Eq. (3), we consider the setting where we achieve the global ε-accuracy condition. That is, for a small positive value ε, we find a solution x_ε which satisfies

f(x_max) − f(x_ε) ≤ ε.   (4)
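Before moving on, here is a minimal NumPy sketch of the GP posterior (Eq. (1)) and the GP-UCB score (Eq. (2)) above, assuming a zero prior mean and a Squared Exponential kernel; the function names and hyperparameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def se_kernel(X1, X2, theta=1.0, length=0.2):
    # k_SE(x, x') = theta^2 exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return theta**2 * np.exp(-d2 / (2 * length**2))

def gp_ucb(Xq, X, y, beta_t, sigma=0.01, theta=1.0, length=0.2):
    # Posterior mean/variance of Eq. (1) with m_0 = 0, then the score of Eq. (2).
    K = se_kernel(X, X, theta, length) + sigma**2 * np.eye(len(X))
    K_inv = np.linalg.inv(K)
    kq = se_kernel(Xq, X, theta, length)              # rows: k(x) at each query
    mu = kq @ K_inv @ y
    var = theta**2 - np.einsum('ij,jk,ik->i', kq, K_inv, kq)
    return mu + np.sqrt(beta_t) * np.sqrt(np.maximum(var, 0.0))

# Toy run on f(x) = -x^2 observed at three points. Far from the data, the
# score approaches sqrt(beta_t) * theta (= 2 here), cf. Proposition 4.1 below.
X = np.array([[-1.0], [0.5], [2.0]])
y = -(X[:, 0] ** 2)
print(gp_ucb(np.array([[0.0], [30.0]]), X, y, beta_t=4.0))
```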
4 Proposed Approach

We make some mild assumptions to develop our main results.

Assumption 4.1 The prior mean function m_0(x) = 0. This is achieved by subtracting the mean from all observations and is common practice.

Assumption 4.2 The kernel k(x, x′) satisfies: (1) when ‖x − x′‖₂ → +∞, k(x, x′) → 0; (2) k(x, x′) ≤ 1 ∀(x, x′); (3) k(x, x) = θ², where θ ≥ 0 is the scale factor of the kernel function. Various kernels satisfy Assumption 4.2, e.g. the Matérn kernel and the Squared Exponential kernel. As the function can always be re-scaled, condition 2 is met without loss of generality [15, 19].

Defining g_k(γ): With these types of kernels, for every small positive γ there always exists g_k(γ) > 0 such that

∀x, x′ : ‖x − x′‖₂ ≥ g_k(γ) ⟹ k(x, x′) ≤ γ.   (5)

The value of g_k(γ) can be computed from γ and the kernel covariance function k(x, x′); e.g. for the Squared Exponential kernel k_SE(x, x′) = θ² exp(−‖x − x′‖₂²/(2l²)), we have g_k(γ) = √(2l² log(θ²/γ)).

Assumption 4.3 The kernel k(x, x′) is known in advance or can be learned from the observations.

4.1 Proposed Expansion Strategy

The ideal expansion strategy should satisfy two characteristics: 1) the algorithm can reach the global ε-accuracy condition in one expansion, and 2) the search space should be minimally expanded so that the algorithm does not spend unnecessary evaluations near the search space boundary. Since we have a black-box objective function, it is not possible to compute the ideal expansion space S_ideal directly. Let the exploration-exploitation parameters {β_t} be chosen to ensure the objective function is upper bounded by the GP-UCB acquisition function with high probability. Then we can estimate S_ideal by a region S_ε defined as a minimal region that contains at least one point whose acquisition function value is within ε of the acquisition function maximum, i.e. ∃x_u ∈ S_ε : |α_UCB(x_u; D_{t-1}) − max_{x∈R^d} α_UCB(x; D_{t-1})| ≤ ε. Due to the approximation, there is no guarantee we can achieve the global ε-accuracy in one expansion. Thus we need multiple sequential expansions. A new expansion is triggered when the local ε-accuracy is satisfied in the previous expansion. In the following, we first derive the value of the GP-UCB acquisition function when x → ∞ (Proposition 4.1), and then use this value to derive analytical expressions for the size of the expansion space S_ε (Theorem 4.1) and for when to trigger a new expansion.

Proposition 4.1 When x → ∞, the GP-UCB acquisition function α_UCB(x; D_{t-1}) → √β_t θ, where β_t is the exploration-exploitation parameter of the GP-UCB acquisition function and θ is the scale factor of the kernel function k(x, x′).
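As a quick illustration of the definition in Eq. (5), the following sketch computes g_k(γ) for the Squared Exponential kernel and checks that the kernel value at exactly that distance equals γ (θ = 1 and all other values are illustrative).

```python
import numpy as np

def g_se(gamma, theta=1.0, length=0.2):
    # For k_SE, k(x, x') <= gamma whenever ||x - x'|| >= sqrt(2 l^2 log(theta^2 / gamma))
    return np.sqrt(2 * length**2 * np.log(theta**2 / gamma))

gamma = 1e-3
r = g_se(gamma)
print(np.exp(-r**2 / (2 * 0.2**2)))  # kernel value at distance r: ~1e-3 = gamma
```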
Derivation of the expansion search space: Our idea is to choose the region S_ε such that S_ε = R^d \ A, where A contains all the points x that are 1) far from all the current observations and 2) such that |α_UCB(x; D_{t-1}) − √β_t θ| < ε/2. Here, we will show that with this choice of S_ε, there exists at least one point in S_ε whose acquisition function value is within ε of the acquisition function maximum, given ε < |√β_t θ − min_{x∈R^d} α_UCB(x; D_{t-1})|. We consider three cases that can happen to the GP-UCB acquisition function (see Figure 1):

• Case 1: The argmax of the GP-UCB acquisition function is at infinity. This means that the GP-UCB acquisition function maximum is equal to √β_t θ. As the GP-UCB acquisition function is continuous and ε < |√β_t θ − min_{x∈R^d} α_UCB(x; D_{t-1})|, there exists a point x_u such that α_UCB(x_u) = √β_t θ − ε/2. By the definition of S_ε, it is straightforward that x_u belongs to S_ε, thus proving that there exists a point in S_ε whose GP-UCB acquisition function value is within ε of the maximum of the acquisition function.

• Case 2: The argmax of the GP-UCB acquisition function x′_max is at a finite location and its acquisition function value is larger than or equal to √β_t θ + ε/2. It is straightforward to see that the argmax x′_max belongs to the region S_ε, and this is the point that satisfies |α_UCB(x′_max; D_{t-1}) − max_{x∈R^d} α_UCB(x; D_{t-1})| ≤ ε.

• Case 3: The GP-UCB acquisition function argmax is at a finite location and the acquisition function maximum is smaller than √β_t θ + ε/2. As the GP-UCB acquisition function is continuous and ε < |√β_t θ − min_{x∈R^d} α_UCB(x; D_{t-1})|, there exists a point x_u ∈ S_ε : α_UCB(x_u; D_{t-1}) = √β_t θ − ε/2. As max_{x∈R^d} α_UCB(x; D_{t-1}) < √β_t θ + ε/2, it follows directly that |α_UCB(x_u; D_{t-1}) − max_{x∈R^d} α_UCB(x; D_{t-1})| ≤ ε.

Theorem 4.1 now formally derives an analytical expression for one way to define region S_ε.

Theorem 4.1 Consider the GP-UCB acquisition function α_UCB(x; D_{t-1}). Let us define the region S_ε = ∪_{i=1}^{|D_{t-1}|} S_i, S_i = {x : ‖x − x_i‖₂ ≤ d_ε}, x_i ∈ D_{t-1}, where |D_{t-1}| is the cardinality of D_{t-1} and

d_ε = g_k( min( √( (√β_t θ ε/2 − ε²/16) / (|D_{t-1}| λ_max) ) / √β_t , 0.25 ε / max( Σ_{z_j≤0} (−z_j), Σ_{z_j≥0} z_j ) ) ),

with g_k(·) as in Eq. (5), λ_max the largest singular value of (K + σ²I)^{-1}, and z_j the j-th element of (K + σ²I)^{-1} y. Given ε < |√β_t θ − min_{x∈R^d} α_UCB(x; D_{t-1})|, there exists at least one point in S_ε whose acquisition function value is within ε of the acquisition function maximum, i.e. ∃x_u ∈ S_ε : |α_UCB(x_u; D_{t-1}) − max_{x∈R^d} α_UCB(x; D_{t-1})| ≤ ε.
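For concreteness, below is a hedged NumPy sketch of the expansion radius d_ε as we read Theorem 4.1; the printed formula is partly garbled in the source, so treat the placement of ε inside it as a best-effort transcription rather than the authors' code. The sketch also includes the bounding hypercube used in practice (Eq. (6) below).

```python
import numpy as np

def expansion_radius(eps, beta_t, theta, K_sigma_inv, y, g_k):
    # d_eps from Theorem 4.1 (best-effort transcription of the formula).
    n = len(y)
    lam_max = np.linalg.svd(K_sigma_inv, compute_uv=False)[0]  # largest singular value
    z = K_sigma_inv @ y                                        # z_j in the theorem
    arg1 = np.sqrt((np.sqrt(beta_t) * theta * eps / 2 - eps**2 / 16)
                   / (n * lam_max)) / np.sqrt(beta_t)
    arg2 = 0.25 * eps / max(-z[z <= 0].sum(), z[z >= 0].sum())
    return g_k(min(arg1, arg2))

def bounding_hypercube(X, d_eps):
    # Eq. (6) below: hypercube enclosing the union of radius-d_eps balls around the data.
    return X.min(axis=0) - d_eps, X.max(axis=0) + d_eps
```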
Acquisition function adaptation: Let us denote S_k as the k-th expansion search space (k ≥ 1). In each S_k, the parameter {β_t} of the GP-UCB acquisition function needs to be valid to ensure the algorithm achieves the local ε-accuracy condition. Hence, a new {β_t} is set after each expansion. Details on how to compute the new {β_t} are in Theorem 5.1.

Triggering the next expansion: To guarantee the global ε-accuracy condition, in each search space S_k we aim to find an iteration T_k which satisfies r_{S_k}(T_k) = (max_{x∈S_k} f(x) − max_{x_i∈D_{T_k}} f(x_i)) ≤ ε before the next expansion. As we do not have max_{x∈S_k} f(x) and {f(x_i)}, we bound r_{S_k}(t) by r_{b,S_k}(t) = max_{x∈S_k} α_UCB(x; D_{t-1}) + 1/t² − max_{x∈D_t} α_LCB(x; D_{t-1}), where α_LCB(x; D_{t-1}) = μ_{t-1}(x) − √β_t σ_{t-1}(x). The next expansion is triggered when r_{b,S_k}(t) reaches ε.

Search space optimization: The theoretical search space developed in Theorem 4.1 is the union of |D_{t-1}| balls. To suit optimizer input, this region is converted to an encompassing hypercube using

min_{x_i∈D_{t-1}}(x_i^k) − d_ε ≤ x^k ≤ max_{x_i∈D_{t-1}}(x_i^k) + d_ε,   k = 1, ..., d.   (6)

Further refinement of the implementation is provided in the supplementary material. Algorithm 1 describes the proposed Bayesian optimization with unknown search space algorithm.

Algorithm 1 Bayesian optimization with unknown search space (GPUCB-UBO)
1: Input: Gaussian Process (GP) M, acquisition functions α_UCB, α_LCB, initial observations D_init, initial search space S_user, function f, positive small threshold ε, evaluation budget T.
2: Output: Point x_ε : max f(x) − f(x_ε) ≤ ε.
3: Initialize D_0 = D_init, S = S_user, β_1, t_k = 0. Update the GP using D_0.
4: for t = 1, 2, ..., T do
5:   Set t_local = t − t_k
6:   Compute x_m = argmax_{x∈S} α_UCB(x; D_{t-1})
7:   Set x_t = x_m, y_t = f(x_t). Update D_t = D_{t-1} ∪ (x_t, y_t).
8:   /* Compute the expansion trigger, the regret upper bound */
9:   Compute r_b = α_UCB(x_t; D_{t-1}) − max_{x∈D_t} α_LCB(x; D_{t-1}) + 1/t_local²
10:  /* If expansion triggered, expand the search space */
11:  if (r_b ≤ ε) or (t == 1) then
12:    Compute the new search space S_ε as defined in Theorem 4.1
13:    Set t_k = t_k + t_local
14:  end if
15:  /* Adjust β_t based on the search space */
16:  Compute β_t following Theorem 5.1
17:  Update the GP using D_t.
18: end for

5 Theoretical Analysis

First, to ensure the validity of our algorithm, we prove that for a wide range of kernels, for any search space S_k and any positive ε, with a proper choice of {β_t}, our trigger-for-expansion condition occurs with high probability. When this happens, the algorithm achieves the local ε-accuracy condition.

Proposition 5.1 For any d-dimensional domain S_k with side length r_k, and for the kernel classes finite-dimensional linear, Squared Exponential and Matérn, suppose the kernel k(x, x′) satisfies the following condition on the derivatives of GP sample paths f: ∃a_k, b_k > 0 such that Pr{sup_{x∈S_k} |∂f/∂x_j| > L} ≤ a_k exp(−(L/b_k)²), j = 1, ..., d. Pick δ ∈ (0, 1), and define β_t = 2 log(t² · 2π²/(3δ)) + 2d log(t² d b_k r_k √(log(4 d a_k/δ))). Then ∀ε > 0, with probability larger than 1 − δ, there ∃T_k : ∀t ≥ T_k, max_{x∈S_k} α_UCB(x; D_{t-1}) − max_{x∈D_t} α_LCB(x; D_{t-1}) ≤ ε − 1/t²; and for all t that satisfy the previous condition, max_{x∈S_k} f(x) − max_{x∈D_t} f(x) ≤ ε.

Second, we prove that with a proper choice of {β_t} and for a wide class of kernels, after a finite number of iterations, our algorithm achieves the global ε-accuracy condition with high probability.

Theorem 5.1 Denote {S_k} as the series of expansion search spaces suggested by our algorithm (k ≥ 1). In each S_k, let T_k be the smallest number of iterations that satisfies our expansion-trigger condition, i.e. r_{b,S_k}(T_k) ≤ ε. Suppose the kernel k(x, x′) belongs to the kernel classes listed in Proposition 5.1 and satisfies the following condition on the derivatives of GP sample paths f: ∃a_k, b_k > 0 such that Pr{sup_{x∈S_k} |∂f/∂x_j| > L} ≤ a_k exp(−(L/b_k)²), j = 1, ..., d. Pick δ ∈ (0, 1), and define

β_t = 2 log((t − Σ_{j≤k−1} T_j)² · 2π²/(3δ)) + 2d log((t − Σ_{j≤k−1} T_j)² d b_k r_k √(log(4 d a_k/δ))),   Σ_{j≤k−1} T_j + 1 ≤ t ≤ Σ_{j≤k} T_j,   k = 1, 2, ....

Then running the proposed algorithm with the above choice of β_t for a sample f of a GP with mean function zero and covariance function k(x, x′), after a finite number of iterations we achieve global ε-accuracy with probability at least 1 − δ, i.e. Pr{f(x_max) − f(x_suggest) ≤ ε} ≥ 1 − δ, where x_suggest is the algorithm recommendation and x_max is the objective function's global argmax.

Discussion: The difference between our method and previous works is that we guarantee the local ε-accuracy condition in each search space, eventually achieving the global ε-accuracy. Previous methods do not give this guarantee, and thus their final solution may not reach global ε-accuracy.
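To make the schedule in Theorem 5.1 concrete, here is a small sketch of β_t within the k-th search space; the constants a_k, b_k come from the (in practice unknown) sample-path derivative bound, and the values below are purely illustrative.

```python
import numpy as np

def beta_t_schedule(t_local, d, r_k, a_k, b_k, delta):
    # Theorem 5.1: t_local = t - sum_{j <= k-1} T_j restarts the schedule in
    # each expanded search space S_k with side length r_k.
    return (2 * np.log(t_local**2 * 2 * np.pi**2 / (3 * delta))
            + 2 * d * np.log(t_local**2 * d * b_k * r_k
                             * np.sqrt(np.log(4 * d * a_k / delta))))

# Illustrative constants only; Section 6 further scales beta_t down by 5.
print([beta_t_schedule(t, d=2, r_k=1.0, a_k=1.0, b_k=1.0, delta=0.1)
       for t in (1, 10, 100)])
```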
6 Experimental Evaluation

We evaluate our method on five synthetic benchmark functions and three hyperparameter tuning tasks for common machine learning models. For problems with dimension d, the optimization evaluation budget is 10d (excluding 3d initial points drawn by Latin hypercube sampling [8]). The experiments were repeated 30 and 20 times for the synthetic functions and machine learning hyperparameter tuning tasks, respectively. For all algorithms, the Squared Exponential kernel is used, the GP models are fitted using the Maximum Likelihood method, and the output observations {y_i} are normalized y_i ~ N(0, 1). As with previous GP-based algorithms that use confidence bounds [3, 19], our theoretical choice of {β_t} in Theorem 5.1 is typically overly conservative. Hence, following the suggestion in [19], for any algorithm that uses the GP-UCB acquisition, we scale β_t down by a factor of 5. Finally, for the synthetic functions, ε is set at 0.05, whilst for the machine learning models, ε is set at 0.02 as we require higher accuracy in these cases.

We compare our proposed method, GPUCB-UBO, with seven baselines: (1) EI-Vanilla: the vanilla Expected Improvement (EI); (2) EI-Volx2: EI with the search space volume doubled every 3d iterations [16]; (3) EI-H: Regularized EI with a hinge-quadratic prior mean where β = 1 and R is the circumradius of the initial search space [16]; (4) EI-Q: Regularized EI with a quadratic prior mean where the widths w are set to those of the initial search space [16]; (5) GPUCB-Vanilla: the vanilla GP-UCB; (6) GPUCB-Volx2: GP-UCB with the search space volume doubled every 3d iterations [16]; (7) GPUCB-FBO: GP-UCB with the filtering expansion strategy in [12].

6.1 Visualization

We visualize the theoretical expansion search spaces derived in Theorem 4.1 on the Beale test function (Figure 2). We show the contour plots of the GP-UCB acquisition functions, the observations (red stars), and the recommendation from the algorithm that corresponds to the acquisition function maximum (cyan stars). The initial user-defined search space (black rectangle) is expanded as per the theoretical search spaces developed in Theorem 4.1 (yellow rectangles). Here we use Eq. (6) to plot the expansion search spaces; however, the spaces developed in Theorem 4.1 are tighter. The figure illustrates that when the argmax of the objective function is outside of the user-defined search space, with our search space expansion strategy this argmax can be located within a finite number of expansions.

6.2 Synthetic Benchmarks

We compare our method with seven baselines on five benchmark test functions: Beale, Eggholder, Levy 3, Hartman 3 and Hartman 6. We use the same experiment setup as in [16]. The side length of the initial user-defined search space is set to 20% of the side length of the function domain - e.g. if the function domain is the unit hypercube [0, 1]^d, then the initial search space has side length 0.2. The center of this initial search space is placed randomly in the domain of the objective function. For each test function and algorithm, we run the experiment 30 times, and each time the initial search space is placed differently. We plot the mean and the standard error of the best found values max_{i=1,...,n} f(x_i) for each test function.

Figure 3 shows that for most test functions, our method GPUCB-UBO achieves better function values in fewer iterations than other methods. For most test functions, our method is better than the other six state-of-the-art approaches (except GPUCB-FBO) by a high margin. Compared with GPUCB-FBO, our method is better on the test functions Hartman3 and Hartman6 while performing similarly on the other three test functions. Note that GPUCB-FBO is 2-3 times slower than our method and the other approaches (see Table 1) because it needs an extra step to numerically solve several optimization problems to construct the new search space. Since we derive the expansion search spaces analytically, our method, in contrast, can optimize the acquisition function within these spaces without any additional computation.
6.3 Hyperparameter Tuning for Machine Learning Models

Next we apply our method to hyperparameter tuning of three machine learning models on the MNIST dataset: elastic net, multilayer perceptron and convolutional neural network. For each model, the experiments are repeated 20 times and each time the initial search space is placed differently.

Elastic Net: Elastic net is a regularized regression method that utilizes the L1 and L2 regularizers. In the model, the hyperparameter α > 0 determines the magnitude of the penalty and the hyperparameter l (0 ≤ l ≤ 1) balances between the L1 and L2 regularizers. We tune α in the normal space while l is tuned in an exponent space (base 10). The initial search space of α and l is randomly placed in the domain [−3, −1] × [0, 1] with side length equal to 20% of the domain's side length. We implement the elastic net model using the function SGDClassifier in the scikit-learn package [13].

Multilayer Perceptron (MLP): We construct a 2-layer MLP with 512 neurons per layer. We optimize three hyperparameters: the learning rate l and the L2-norm regularization hyperparameters lr1 and lr2 of the two layers. All the hyperparameters are tuned in the exponent space (base 10). The initial search space is a randomly placed unit cube in the cube [−6, −1]³. The model is implemented using TensorFlow and trained with the Adam optimizer for 20 epochs with batch size 128.

Convolutional Neural Network (CNN): We consider a CNN with two convolutional layers. The CNN architecture (e.g. the number of filters, the filter shape, etc.) is the standard architecture published on the official GitHub repository of TensorFlow (https://github.com/tensorflow/tensorflow). We optimize three hyperparameters: the learning rate l and the dropout rates rd1, rd2 in pooling layers 1 and 2. We tune rd1, rd2 in the normal space while l is tuned in an exponent space (base 10). The initial search space of rd1, rd2, l is randomly placed in the domain [0, 1] × [0, 1] × [−5, −1] with side length equal to 20% of this domain's side length. The network is trained with the Adam optimizer for 20 epochs with batch size 128.

Given a set of hyperparameters, we train the models with this hyperparameter setting on the MNIST training dataset (55000 patterns) and then test on the MNIST test dataset (10000 patterns). The Bayesian optimization method then suggests a new hyperparameter setting based on the prediction accuracy on the test dataset. This process is conducted iteratively until the evaluation budget (10d evaluations) is depleted. We plot the prediction accuracy in Figure 4. For the elastic net model, our method GPUCB-UBO performs similarly to GPUCB-FBO while outperforming the other six approaches significantly. For the MLP model, GPUCB-UBO performs far better than the other approaches; after only 12 iterations, it achieves a prediction accuracy of 97.8%, whilst the other approaches take more than 24 iterations to reach this level. For the CNN model, GPUCB-UBO also outperforms the other approaches by a high margin. After 30 iterations, it provides a CNN model with prediction accuracy of 98.7%.
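As an illustration of one black-box objective in the elastic net setup above, the sketch below evaluates accuracy with scikit-learn's SGDClassifier; we substitute the small built-in digits dataset for MNIST, and the exponent-space handling of α is an assumption read off the [−3, −1] domain, not something the paper spells out.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_digits          # small stand-in for MNIST
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def elastic_net_objective(log10_alpha, l1_ratio):
    # One black-box evaluation for the BO loop: alpha searched over its
    # exponent (base 10), l1_ratio over [0, 1].
    clf = SGDClassifier(penalty='elasticnet', alpha=10.0**log10_alpha,
                        l1_ratio=l1_ratio, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # test accuracy to be maximised

print(elastic_net_objective(-2.0, 0.5))
```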
7 Conclusion

We propose a novel Bayesian optimization framework for when the search space is unknown. We guarantee that in iterative expansions of the search space, our method can find a point whose function value is within ε of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers a minimal required expansion iteratively. We demonstrate our method on both synthetic benchmark functions and machine learning hyper-parameter tuning tasks, showing that it outperforms state-of-the-art approaches. Our source code is publicly available at https://github.com/HuongHa12/BO_unknown_searchspace.

Acknowledgments

This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
1. How does the approach deal with the situation where there is no prior knowledge to guarantee that the specified search space contains the global optimum? 2. Can you explain how the upper confidence bound (UCB) acquisition function works in expanding the search space? 3. How does the algorithm ensure that it returns a solution achieving ε-accuracy? 4. In the proof of Theorem 4.1, what happens if |√β_t θ − min_{x∈R^d} α_UCB(x; D_{t−1})| = 0? 5. How is the value of γ determined in equation (14)? 6. Why do the second and third inequalities hold in equation (15)? 7. How can equation (15) hold with probability at least 1−δ? 8. What are the values of a_k and b_k in β_t in the experimental settings? 9. Is there a mistake in the posterior mean in equation (1)? Should it be "y-m" instead of "y"? 10. Why was Proposition 5.1 presented as a lemma in the supplementary material instead of being included in the main paper?
Review
Review Applying Bayesian optimization to expensive black-box problems needs to specify the bound of the search space. However, when tackling a completely new problem, there is no prior knowledge to guarantee that the specified search space contains the global optimum. The paper proposes an approach to deal with this situation. In the approach, the user first specifies an initial search space; then the bound of the search space automatically expands as the iteration proceeds; finally the algorithm returns a solution achieving \epsilon-accuracy. The key is how to expand the search space. The approach uses upper confidence bound (UCB) as the acquisition function, and selects the expanded search space containing at least one point whose acquisition function value is within \epsilon of the acquisition function maximum. Also, because the gap between the best selected point and the global optimum of the current search space can be bounded, the approach can return a solution achieving \epsilon-accuracy. I have checked all the proofs, and believe most of them are correct. However, I also found some places which are unclear and may be incorrect.
Questions: 1. In the proof of Theorem 4.1, \epsilon is set less than \| \sqrt(\beta_t)\theta -\min_{x\in R^d}a_{UCB}(x;D_{t-1})\|. What if \| \sqrt(\beta_t)\theta -\min_{x\in R^d}a_{UCB}(x;D_{t-1})\|=0? 2. \gamma is the minimum value among a set of numbers above eq(14), but is a specific value in eq(14). How do you determine its value in eq(14)? 3. Why do the second and the third inequalities hold in eq(15)? 4. How can eq(15) hold with probability at least 1-\delta? max_{x\in D_t}a_{LCB} is used to replace max_{x\in D_t}f(x) in the second inequality. Did you consider the probability for max_{x\in D_t}a_{LCB}<= max_{x\in D_t}f(x) to hold?
Question about experiments: 1. a_k and b_k in \beta_t are unknown. What are their values in your experimental settings?
Minor comments: 1. The posterior mean in eq(1) is not correct. "y" -> "y-m"? 2. Proposition 5.1 is proposed in the paper, but it is presented as a lemma in the supplementary.
NIPS
Title Bayesian Optimization with Unknown Search Space Abstract Applying Bayesian optimization in problems wherein the search space is unknown is challenging. To address this problem, we propose a systematic volume expansion strategy for the Bayesian optimization. We devise a strategy to guarantee that in iterative expansions of the search space, our method can find a point whose function value within of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers a minimal expansion required iteratively. We derive analytic expressions for when to trigger the expansion and by how much to expand. We also provide theoretical analysis to show that our method achieves -accuracy after a finite number of iterations. We demonstrate our method on both benchmark test functions and machine learning hyper-parameter tuning tasks and demonstrate that our method outperforms baselines. 1 Introduction Choosing where to search matters. A time-tested path in the quest for new products or processes is through experimental optimization. Bayesian optimization offers a sample efficient strategy for experimental design by optimizing expensive black-box functions [9–11]. But one problem is that users need to specify a bounded region to restrict the search of the objective function extrema. When tackling a completely new problem, users do not have prior knowledge, hence there is no guarantee that an arbitrarily defined search space contains the global optimum. Thus application of the Bayesian optimization framework when the search region is unknown remains an open challenge [16]. One approach is to use a regularized acquisition function such that its maximum can never be at infinity - hence no search space needs to be declared and an unconstrained optimizer can be used [16]. Other approaches use volume expansion, i.e. starting from the user-defined region, the search space is expanded during the optimization. The simplest strategy is to repeatedly double the volume of the search space every several iterations [16]. Nguyen et al suggest a volume expansion strategy based on the evaluation budget [12]. All these methods require users to specify critical parameters - as example, regularization parameters [16], or growth rate, expansion frequency (volume doubling) [16] or budget [12]. These parameters are difficult to specify in practice. Additionally, [12] is computationally expensive and the user-defined search space needs to be close to the global optimum. In this paper, we propose a systematic volume expansion strategy for the Bayesian optimization framework wherein the search space is unknown. Without any prior knowledge about the objective function argmax or strict assumptions on the behavior of the objective function, it is impossible to guarantee the global convergence when the search space is continuously expanded. To circumvent this problem, we consider the setting where we achieve the global -accuracy condition, that is, we aim to find a point whose function value is within of the objective function global maximum. Our volume expansion strategy is based on two guiding principles: 1) The algorithm can reach a point whose function value is within of the objective function maximum in one expansion, and, 2) the search space should be minimally expanded so that the algorithm does not spend unnecessary 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. evaluations near the search space boundary. 
As the objective function is unknown, it is not possible to compute this ideal expansion region. Using the GP-UCB acquisition function as a surrogate, this region is computed as one that contains at least one point whose acquisition function value is within of the acquisition function maximum. However, by using a surrogate to approximate the objective function, there is no guarantee that we can achieve the global -accuracy within one expansion. Hence multiple expansions are required, and a new expansion is triggered when the local -accuracy is satisfied, i.e. when the algorithm can find a point whose function value is within of the objective function maximum in the current search space. Analytical expressions for the size of the new expansion space and when to trigger the expansion are derived. The guarantees for the -accuracy condition, however, now lapses in the expanded region, and so we adjust the acquisition function appropriately to maintain the guarantee. Finally, we provide theoretical analysis to show that our proposed method achieves the global -accuracy condition after a finite number of iterations. We demonstrate our algorithm on five synthetic benchmark functions and three real hyperparameter tuning tasks for common machine learning models: linear regression with elastic net, multilayer perceptron and convolutional neural network. Our experimental results show that our method achieves better function values with fewer samples compared to state-of-the-art approaches. In summary, our contributions are: • Formalising the analysis for Bayesian optimization framework in an unknown search space setting, and introducing -accuracy as a way to track the algorithmic performance; • Providing analytic expressions for how far to expand the search space and when to expand the search space to achieve global -accuracy; • Deriving theoretical global -accuracy convergence; and, • Demonstrating our algorithm on both synthetic and real-world problems and comparing it against state-of-the-art methods. Our method differs from previous works in that 1) our method does not require any algorithmic parameters, automatically adjusting both when to trigger the expansion and by how much to expand, and, 2) our approach is the only one to guarantee the global -accuracy condition. This is because we guarantee the local -accuracy condition in each search space, thus eventually the global - accuracy is achieved. Without this local guarantee, the suggested solution cannot be guaranteed to reach global -accuracy. The regularization [16] and the filtering method [12] require the global optimum to be within a bound constructed by either the user specified regularizer or the budget. The volume doubling method [16] can continue to expand the search space to infinity, however, the local -accuracy condition is not guaranteed in each search space. The paper is organized as follows. Section 2 gives an overview of Bayesian optimization and discusses some of the related work. Section 3 describes the problem setup. Section 4 proposes our new expansion strategy for the Bayesian optimization framework when the search space is unknown. A theoretical analysis for our proposed method is presented in Section 5. In Section 6, we demonstrate the effectiveness of our algorithm by numerical experiments. Finally, Section 7 concludes the paper. 
2 Background and Related Work 2.1 Background Bayesian optimization is a powerful optimization method to find the global optimum of an unknown objective function f(x) by sequential queries [9–11, 17, 18]. First, at time t, a surrogate model is used to approximate the behaviour of f(x) using all the current observed data Dt−1 = {(xi, yi)}ni=1, yi = f(xi) + ξi, where ξi ∼ N (0, σ2) is the noise. Second, an acquisition function is constructed from the surrogate model that suggests the next point xitr to be evaluated. The objective function is then evaluated at xitr and the new data point (xitr, yitr) is added to Dt−1. These steps are conducted in an iterative manner to get the best estimate of the global optimum. The most common choice for the surrogate model used in Bayesian optimization is the Gaussian Process (GP) [14]. Assume the function f follows a GP with mean function m0(x) and covariance function k(x, x′), the posterior distribution of f given the observed data Dt−1 = {(xi, yi)}ni=1 is a GP with the following posterior mean and variance, µt−1(x) = m0(x) + k|Dt−1|(x) T (K|Dt−1| + σ 2I|Dt−1|) −1y|Dt−1|, σ2t−1(x) = k(x, x)− k|Dt−1|(x) T (K|Dt−1| + σ 2I|Dt−1|) −1k|Dt−1|(x), (1) where y|Dt−1| = [y1, . . . , y|Dt−1|] T , k|Dt−1|(x) = [k(x, xi)] |Dt−1| i=1 , K|Dt−1| = [k(xi, xj)]i,j , I|Dt−1| is the |Dt−1| × |Dt−1| identity matrix and |Dt−1| denotes the cardinality of Dt−1. To aid readability, in the sequel we remove the notation that shows the dependence of k,K, I, y on |Dt−1|. There are many existing acquisition functions [6, 7, 10, 11, 20] and in this paper, we focus only on the GP-UCB acquisition function [1, 2, 5, 19]. The GP-UCB acquisition function is defined as, αUCB(x;Dt−1) = µt−1(x) + √ βtσt−1(x), (2) where µt−1(x), σt−1(x) are the posterior mean and standard deviation of the GP given observed data Dt−1 and βt ≥ 0 is an appropriate parameter that balances the exploration and exploitation. Given a search domain, {βt} can be chosen as in [19] to ensure global convergence in this domain. 2.2 Related Work All the work related to the problem of Bayesian optimization with unknown search space have been described in Section 1. There is the work in [3] introduces the term -accuracy. However, their purpose is to unify the Bayesian optimization and the Level-set estimation framework. 3 Problem Setup We wish to find the global argmax xmax of an unknown objective function f : Rd 7→ R, whose argmax is at a finite location, i.e. xmax = argmaxx∈S∗ f(x), (3) where S∗ is a finite region that contains the argmax of the function f(x). In practice, the region S∗ is not known in advance, so users need to identify a search domain Suser which is likely to contain the argmax of f(x). This search domain can be set arbitrarily or based on limited prior knowledge. Thus there is no guarantee that Suser contains the global optimum of the objective function. In the trivial cases when the search space S∗ is known or when S∗ ⊂ Suser, the global convergence can be guaranteed through classical analysis [4, 19]. Here, we consider the general case when S∗ may or may not be a subset of Suser. Without any prior knowledge about S∗ or strict assumptions on the behavior of the objective function, it is impossible to guarantee the global convergence. Therefore, in this work, instead of solving Eq. (3), we consider the setting where we achieve the global -accuracy condition. That is, for a small positive value , we find a solution x which satisfies, f(xmax)− f(x ) ≤ . 
(4) 4 Proposed Approach We make some mild assumptions to develop our main results. Assumption 4.1 The prior mean function m0(x) = 0. This is done by subtracting the mean from all observations and is common practice. Assumption 4.2 The kernel k(x, x′) satisfies, (1) when ‖x − x′‖2 → +∞, k(x, x′) → 0; (2) k(x, x′) ≤ 1 ∀(x, x′) ; (3) k(x, x) = θ2, where θ ≥ 0 is the scale factor of the kernel function. Various kernels satisfy Assumption 4.2, e.g. the Matérn kernel, the Square Exponential kernel. As the function can always be re-scaled, condition 2 is met without loss of generality [15, 19]. Defining gk(γ): With these types of kernels, for all small positive γ, there always exists gk(γ) > 0, ∀x, x′ : ‖x− x′‖2 ≥ gk(γ), k(x, x′) ≤ γ. (5) The value of gk(γ) can be computed from γ and the kernel covariance function k(x, x′) i.e. for Squared Exponential kernel kSE(x, x′) = θ2exp(−‖x− x′‖22/(2l2)), gk(γ) will be √ 2l2log(θ2/γ). Assumption 4.3 The kernel k(x, x′) is known in advance or can be learned from the observations. 4.1 Proposed Expansion Strategy The ideal expansion strategy should satisfy two characteristics: 1) The algorithm can reach the global -accuracy condition in one expansion, and, 2) the search space should be minimally expanded so that the algorithm does not spend unnecessary evaluations near the search space boundary. Since we have a black-box objective function, it is not possible to compute the ideal expansion space Sideal directly. Let the exploration-exploitation parameters {βt} be chosen to ensure the objective function is upper bounded by the GP-UCB acquisition function with high probability. Then we can estimate Sideal by a region S as a minimal region that contains at least one point whose acquisition function value is within from the acquisition function maximum, i.e. ∃xu ∈ S : |αUCB(xu;Dt−1)−maxx∈Rd αUCB(x;Dt−1)| ≤ . Due to the approximation, there is no guarantee we can achieve the global -accuracy in one expansion. Thus we need multiple expansions sequential. A new expansion is triggered when the local -accuracy is satisfied in the previous expansion. In the following, we first derive the value of the GP-UCB acquisition function when x→∞ (Proposition 4.1), and then use this value to derive analytical expressions for the size of the expansion space S (Theorem 4.1) and when to trigger a new expansion. Proposition 4.1 When x→∞, the GP-UCB acquisition function αUCB(x;Dt−1)→ √ βtθ, where βt is the exploration-exploitation parameter of the GP-UCB acquisition function and θ is the scale factor of the kernel function k(x, x′). Derivation of the expansion search space Our idea is to choose the region S such that S = Rd \ A, where 1) A contains all the points x that are far from all the current observations, and, 2) A := {x ∈ Rd : |αUCB(x;Dt−1)− √ βtθ| < /2}. Here, we will show that with this choice of S, there exists at least one point in S whose acquisition function value is within from the acquisition function maximum, given < | √ βtθ −minx∈Rd(αUCB(x;Dt−1))|. We consider three cases that can happen to the GP-UCB acquisition function (See Figure 1): • Case 1: The argmax of the GP-UCB acquisition function is at infinity. This means that the GP-UCB acquisition function maximum is equal to √ βtθ. As the GP-UCB acquisition function is continuous and < | √ βtθ −minx∈Rd(αUCB(x;Dt−1))|, hence, there exists a point xu such that αUCB(xu) = √ βtθ − /2. 
By the definition of S, it is straightforward that xu belongs to S , thus proving that there exists a point in S whose GP-UCB acquisition function value is within from the maximum of the acquisition function. • Case 2: The argmax of the GP-UCB acquisition function x′max is at a finite location and its acquisition function value is larger or equal √ βtθ + /2. It is straightforward to see that the argmax x′max belongs to the region S and this is the point that satisfies |αUCB(x′max;Dt−1)−maxx∈Rd αUCB(x;Dt−1)| ≤ . • Case 3: The GP-UCB acquisition function argmax is at a finite location and the acquisition function maximum is smaller than √ βtθ + /2. As the GP-UCB acquisition function is continuous and < | √ βtθ − minx∈Rd(αUCB(x;Dt−1))|, there exists a point xu ∈ S : αUCB(xu;Dt−1) = √ βtθ − /2. As maxx∈Rd αUCB(x;Dt−1) < √ βtθ + /2, it follows directly that |αUCB(xu;Dt−1)−maxx∈Rd αUCB(x;Dt−1)| ≤ . Theorem 4.1 now formally derives an analytical expression for one way to define region S. Algorithm 1 Bayesian optimization with unknown search space (GPUCB-UBO) 1: Input: Gaussian Process (GP)M, acquisition functions αUCB , αLCB , initial observationsDinit, initial search space Suser, function f , positive small threshold , evaluation budget T . 2: Output: Point x : max f(x)− f(x ) ≤ . 3: Initialize D0 = Dinit, S = Suser, β1, tk = 0. Update the GP using D0. 4: for t = 1, 2, . . . , T do 5: Set tlocal = t− tk 6: Compute xm = argmaxx∈S αUCB(x;Dt−1) 7: Set xt = xm, yt = f(xt). Update Dt = Dt−1 ∪ (xt, yt). 8: /∗ Compute the expansion trigger, the regret upper bound ∗/ 9: Compute rb = αUCB(xt;Dt−1)−maxx∈Dt αLCB(x;Dt−1) + 1/t2local 10: /∗ If expansion triggered, expand the search space ∗/ 11: if (rb <= ) | (t == 1) then 12: Compute the new search space S as defined in Theorem 4.1 13: Set tk = tk + tlocal 14: end if 15: /∗ Adjust the βt based on the search space ∗/ 16: Compute βt following Theorem 5.1 17: Update the GP using Dt. 18: end for Theorem 4.1 Consider the GP-UCB acquisition function αUCB(x;Dt−1). Let us define the region S = ⋃|Dt−1| i=1 Si, Si = {x : ‖x − xi‖2 ≤ d }, xi ∈ Dt−1, |Dt−1| is the cardinality of Dt−1, d = gk(min( √ ( √ βtθ /2− 2/16)/(|Dt−1|λmax)/ √ βt, 0.25 /max( ∑ zj≤0−zj , ∑ zj≥0 zj))) with gk(.) as in Eq. (5), λmax be the largest singular value of (K + σ2I)−1, and zj be the jth element of (K + σ2I)−1y. Given < | √ βtθ −minx∈Rd(αUCB(x;Dt−1))|, then there exists at least one point in S whose acquisition function value is within from the acquisition function maximum, i.e. ∃xu ∈ S : |αUCB(xu;Dt−1)−maxx∈Rd αUCB(x;Dt−1)| ≤ . Acquisition function adaption Let us denote Sk as the kth expansion search space (k ≥ 1). In each Sk, the parameter {βt} of the GP-UCB acquisition function needs to be valid to ensure the algorithm achieves the local -accuracy condition. Hence, a new {βt} is adjusted after each expansion. Details on how to compute the new {βt} are in Theorem 5.1. Triggering the next expansion To guarantee the global -accuracy condition, in each search space Sk, we aim to find an iteration Tk which satisfies rSk(Tk) = (maxx∈Sk f(x)−maxxi∈DTk f(xi)) ≤ before the next expansion. As we do not have maxx∈Sk f(x) and {f(xi)}, we bound rSk(t) by rb,Sk(t) = maxx∈Sk αUCB(x;Dt−1)+1/t2−maxx∈Dt αLCB(x;Dt−1), where αLCB(x;Dt−1) = µt−1(x)− √ βtσt−1(x). The next expansion is triggered when rb,Sk(t) reaches . Search space optimization The theoretical search space developed in Theorem 4.1 is the union of |Dt−1| balls. 
Further refinement of the implementation is provided in the supplementary material. Algorithm 1 describes the proposed Bayesian optimization with unknown search space algorithm.
5 Theoretical Analysis
First, to ensure the validity of our algorithm, we prove that for a wide range of kernels, for any search space Sk and any positive ε, with a proper choice of {βt}, our expansion trigger condition occurs with high probability. When this happens, the algorithm achieves the local ε-accuracy condition.
Proposition 5.1 For any d-dimensional domain Sk with side length rk, and for the kernel classes finite-dimensional linear, Squared Exponential, and Matérn, suppose the kernel k(x, x′) satisfies the following condition on the derivatives of GP sample paths f: ∃ak, bk > 0, Pr{supx∈Sk |∂f/∂xj| > L} ≤ ak exp(−(L/bk)²), j = 1, . . . , d. Pick δ ∈ (0, 1) and define βt = 2 log(t²2π²/(3δ)) + 2d log(t²dbk rk √(log(4dak/δ))). Then ∀ε > 0, with probability larger than 1 − δ, there ∃Tk : ∀t ≥ Tk, maxx∈Sk αUCB(x;Dt−1) − maxx∈Dt αLCB(x;Dt−1) ≤ ε − 1/t²; and for all t that satisfy the previous condition, maxx∈Sk f(x) − maxx∈Dt f(x) ≤ ε.
Second, we prove that with a proper choice of {βt} and for a wide class of kernels, after a finite number of iterations our algorithm achieves the global ε-accuracy condition with high probability.
Theorem 5.1 Denote by {Sk} the series of expansion search spaces suggested by our algorithm (k ≥ 1). In each Sk, let Tk be the smallest number of iterations that satisfies our expansion trigger condition, i.e. rb,Sk(Tk) ≤ ε. Suppose the kernel k(x, x′) belongs to the kernel classes listed in Proposition 5.1 and satisfies the following condition on the derivatives of GP sample paths f: ∃ak, bk > 0, Pr{supx∈Sk |∂f/∂xj| > L} ≤ ak exp(−(L/bk)²), j = 1, . . . , d. Pick δ ∈ (0, 1) and define
βt = 2 log((t − Σ_{j≤k−1} Tj)²2π²/(3δ)) + 2d log((t − Σ_{j≤k−1} Tj)²dbk rk √(log(4dak/δ))), for Σ_{j≤k−1} Tj + 1 ≤ t ≤ Σ_{j≤k} Tj, k = 1, 2, . . . .
Then, running the proposed algorithm with the above choice of βt for a sample f of a GP with mean function zero and covariance function k(x, x′), after a finite number of iterations we achieve global ε-accuracy with probability at least 1 − δ, i.e. Pr{f(xmax) − f(xsuggest) ≤ ε} ≥ 1 − δ, where xsuggest is the algorithm's recommendation and xmax is the objective function's global argmax.
Discussion  The difference between our method and previous works is that we guarantee the local ε-accuracy condition in each search space, eventually achieving global ε-accuracy. Previous methods do not give this guarantee, and thus their final solution may not reach global ε-accuracy.
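A minimal sketch of the βt schedule in Theorem 5.1, which restarts the Proposition 5.1 schedule after each expansion; the bookkeeping variable T_hist and the names a_k, b_k are ours, and concrete values of ak, bk would have to come from the kernel at hand.

```python
import numpy as np

def beta_t(t, T_hist, d, r_k, a_k, b_k, delta):
    """beta_t from Theorem 5.1: the Proposition 5.1 schedule, restarted after
    each expansion. T_hist holds the lengths T_1, ..., T_{k-1} of the completed
    expansions, so tau below is the iteration count within the current space."""
    tau = t - sum(T_hist)
    assert tau >= 1, "t must lie inside the current expansion window"
    return (2 * np.log(tau**2 * 2 * np.pi**2 / (3 * delta))
            + 2 * d * np.log(tau**2 * d * b_k * r_k
                             * np.sqrt(np.log(4 * d * a_k / delta))))
```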
6 Experimental Evaluation
We evaluate our method on five synthetic benchmark functions and three hyperparameter tuning tasks for common machine learning models. For problems with dimension d, the optimization evaluation budget is 10d (excluding 3d initial points drawn by Latin hypercube sampling [8]). The experiments were repeated 30 times for the synthetic functions and 20 times for the machine learning hyperparameter tuning tasks. For all algorithms, the Squared Exponential kernel is used, the GP models are fitted using maximum likelihood, and the output observations {yi} are normalized to yi ∼ N(0, 1). As with previous GP-based algorithms that use confidence bounds [3, 19], our theoretical choice of {βt} in Theorem 5.1 is typically overly conservative. Hence, following the suggestion in [19], for all algorithms that use the GP-UCB acquisition, we scale βt down by a factor of 5. Finally, ε is set at 0.05 for the synthetic functions, whilst for the machine learning models ε is set at 0.02, as we require higher accuracy in these cases.
We compare our proposed method, GPUCB-UBO, with seven baselines: (1) EI-Vanilla: the vanilla Expected Improvement (EI); (2) EI-Volx2: EI with the search space volume doubled every 3d iterations [16]; (3) EI-H: Regularized EI with a hinge-quadratic prior mean, where β = 1 and R is the circumradius of the initial search space [16]; (4) EI-Q: Regularized EI with a quadratic prior mean, where the widths w are set to those of the initial search space [16]; (5) GPUCB-Vanilla: the vanilla GP-UCB; (6) GPUCB-Volx2: GP-UCB with the search space volume doubled every 3d iterations [16]; (7) GPUCB-FBO: GP-UCB with the filtering expansion strategy of [12].
6.1 Visualization
We visualize the theoretical expansion search spaces derived in Theorem 4.1 on the Beale test function (Figure 2). We show the contour plots of the GP-UCB acquisition functions, along with the observations (red stars) and the algorithm's recommendations corresponding to the acquisition function maximum (cyan stars). The initial user-defined search space (black rectangle) is expanded following the theoretical search spaces developed in Theorem 4.1 (yellow rectangles). Here we use Eq. (6) to plot the expansion search spaces; the spaces developed in Theorem 4.1 are, however, tighter. The figure illustrates that when the argmax of the objective function lies outside the user-defined search space, our expansion strategy can locate it within a finite number of expansions.
6.2 Synthetic Benchmarks
We compare our method with the seven baselines on five benchmark test functions: Beale, Eggholder, Levy 3, Hartman 3 and Hartman 6. We use the same experimental setup as in [16]. The side length of the initial user-defined search space is set to 20% of the side length of the function domain; e.g. if the function domain is the unit hypercube [0, 1]^d, the initial search space has side length 0.2. The center of this initial search space is placed randomly in the domain of the objective function. For each test function and algorithm, we run the experiment 30 times, each time with a differently placed initial search space. We plot the mean and the standard error of the best function value found, max_{i=1,...,n} f(xi), for each test function. Figure 3 shows that for most test functions, our method GPUCB-UBO achieves better function values in fewer iterations than the other methods. For most test functions, our method outperforms the six state-of-the-art approaches other than GPUCB-FBO by a wide margin. Compared with GPUCB-FBO, our method is better on the test functions Hartman3 and Hartman6 while performing similarly on the other three test functions. Note that the computation time of GPUCB-FBO is 2-3 times slower than that of our method and the other approaches (see Table 1), because it needs an extra step that numerically solves several optimization problems to construct the new search space. Since we derive the expansion search spaces analytically, our method can instead optimize the acquisition function within these spaces without any additional computation.
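As a small illustration of the experimental protocol above, here is one way to place a random initial search space with 20% of the domain's side length; the function name is ours, and this variant keeps the box inside the domain, which is one reasonable reading of the setup.

```python
import numpy as np

def random_initial_box(domain_lo, domain_hi, frac=0.2, rng=None):
    """Place a random initial search space inside the function domain.

    The box has side length `frac` times the domain's side length; its lower
    corner is drawn uniformly so that the whole box stays within the domain.
    """
    rng = np.random.default_rng() if rng is None else rng
    domain_lo, domain_hi = np.asarray(domain_lo, float), np.asarray(domain_hi, float)
    side = frac * (domain_hi - domain_lo)
    lo = rng.uniform(domain_lo, domain_hi - side)
    return lo, lo + side

# e.g. a box of side 0.2 inside the unit square [0, 1]^2
print(random_initial_box([0, 0], [1, 1]))
```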
6.3 Hyperparameter Tuning for Machine Learning Models
Next, we apply our method to hyperparameter tuning of three machine learning models on the MNIST dataset: elastic net, multilayer perceptron, and convolutional neural network. For each model, the experiments are repeated 20 times, each time with a differently placed initial search space.
Elastic Net  Elastic net is a regularized regression method that utilizes both the L1 and L2 regularizers. In the model, the hyperparameter α > 0 determines the magnitude of the penalty and the hyperparameter l (0 ≤ l ≤ 1) balances between the L1 and L2 regularizers. We tune l in the normal space while α is tuned in an exponent space (base 10). The initial search space of α and l is randomly placed in the domain [−3, −1] × [0, 1], with side length 20% of the domain's side length. We implement the elastic net model using the class SGDClassifier from the scikit-learn package [13].
Multilayer Perceptron (MLP)  We construct a 2-layer MLP with 512 neurons per layer. We optimize three hyperparameters: the learning rate l and the L2-norm regularization hyperparameters lr1 and lr2 of the two layers. All hyperparameters are tuned in the exponent space (base 10). The initial search space is a randomly placed unit cube inside the cube [−6, −1]³. The model is implemented using TensorFlow and trained with the Adam optimizer for 20 epochs with batch size 128.
Convolutional Neural Network (CNN)  We consider a CNN with two convolutional layers. The CNN architecture (e.g. the number of filters, the filter shape, etc.) follows the standard architecture published on the official GitHub repository of TensorFlow.¹ We optimize three hyperparameters: the learning rate l and the dropout rates rd1, rd2 in pooling layers 1 and 2. We tune rd1, rd2 in the normal space while l is tuned in an exponent space (base 10). The initial search space of rd1, rd2, l is randomly placed in the domain [0, 1] × [0, 1] × [−5, −1], with side length 20% of this domain's side length. The network is trained with the Adam optimizer for 20 epochs with batch size 128.
Given a set of hyperparameters, we train the models with this hyperparameter setting on the MNIST training set (55000 patterns) and then test on the MNIST test set (10000 patterns). The Bayesian optimization method then suggests a new hyperparameter setting based on the prediction accuracy on the test set. This process is repeated until the evaluation budget (10d evaluations) is depleted.
¹https://github.com/tensorflow/tensorflow
We plot the prediction accuracy in Figure 4. For the elastic net model, our method GPUCB-UBO performs similarly to GPUCB-FBO while outperforming the other six approaches significantly. For the MLP model, GPUCB-UBO performs far better than the other approaches; after only 12 iterations, it achieves a prediction accuracy of 97.8%, whilst the other approaches take more than 24 iterations to reach this level. For the CNN model, GPUCB-UBO also outperforms the other approaches by a wide margin: after 30 iterations, it provides a CNN model with prediction accuracy of 98.7%.
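As an illustration of the black-box objectives tuned in this section, here is a sketch of the elastic net objective using scikit-learn's SGDClassifier, as named above; the exponent-space handling of α follows the setup described in the text, while the data loading and split details are our own simplification (and loss="log_loss" is called "log" in older scikit-learn versions).

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
# The paper uses a 55000/10000 train/test split of MNIST.
X_tr, X_te, y_tr, y_te = train_test_split(
    X / 255.0, y, train_size=55000, test_size=10000, random_state=0)

def elastic_net_objective(log10_alpha, l1_ratio):
    """Black-box objective f(x): test accuracy of an elastic-net-penalized
    linear classifier; alpha is tuned in exponent space (base 10)."""
    clf = SGDClassifier(loss="log_loss", penalty="elasticnet",
                        alpha=10.0 ** log10_alpha, l1_ratio=l1_ratio)
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# One query of the kind the Bayesian optimizer would issue:
print(elastic_net_objective(-2.0, 0.5))
```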
7 Conclusion
We propose a novel Bayesian optimization framework for when the search space is unknown. We guarantee that, through iterative expansions of the search space, our method can find a point whose function value is within ε of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers the minimal expansion required at each step. We demonstrate our method on both synthetic benchmark functions and machine learning hyperparameter tuning tasks and show that it outperforms state-of-the-art approaches. Our source code is publicly available at https://github.com/HuongHa12/BO_unknown_searchspace.
Acknowledgments
This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
1. What is the main contribution of the paper regarding Bayesian Optimization? 2. What are the strengths and weaknesses of the proposed algorithm compared to other approaches in the literature? 3. How does the reviewer assess the clarity, organization, and completeness of the paper's content? 4. Are the theoretical claims and empirical results well-supported and adequately presented? 5. Does the paper provide unique insights or advance the state of the art in any way? 6. What are the reviewer's arguments for and against accepting the paper, considering its quality, originality, significance, and practicality?
Review
Review
This paper proposes an algorithm to expand the search space for Bayesian Optimization in order to find an optimum with some guarantees. The paper is complete in the sense that it provides theoretical claims to support its evidence, the proposition of an algorithm, and some experiments. Although this is nice, it is another extension of UCB (there are a lot of them, since UCB is nice to work with from the theoretical point of view), and this task of expanding the search space has been treated in a lot of NIPS papers this year, with other strategies that I believe are more practical and general.
Related to: The UCB acquisition function, \epsilon-accuracy.
Strengths: Provides theoretical and empirical content.
Weaknesses: Not the most practical approach to perform this task. It is a combination of well-known techniques. Some clarity and organization issues.
Does this submission add value to the NIPS community?: Although other approaches perform similar tasks, this is another view that might be taken into account. Perhaps it is not the most practical one, though.
Quality: Is this submission technically sound?: Not a lot, but what it exposes is sound, in the sense that I have not read about \epsilon-accuracy in BO before.
Are claims well supported by theoretical analysis or experimental results?: Yes they are; it is a strength of the paper.
Is this a complete piece of work or work in progress?: I would say that it is a complete piece of work.
Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?: Yes, I believe so.
Clarity: Is the submission clearly written?: I have some suggestions that may be corrected, in my humble opinion.
Is it well organized?: Yes it is.
Does it adequately inform the reader?: I believe it to be so.
Originality: Are the tasks or methods new?: They already exist in the literature.
Is the work a novel combination of well-known techniques?: Combination of well-known techniques.
Is it clear how this work differs from previous contributions?: Yes, but not so clear as to add real value in practice.
Is related work adequately cited?: Yes it is, with one exception.
Significance: Are the results important?: I would not bet that this approach is going to be used massively in practice.
Are others likely to use the ideas or build on them?: I do not know, maybe, but I like other expansion approaches proposed to NIPS this year more.
Does the submission address a difficult task in a better way than previous work?: Previous maybe, current no.
Does it advance the state of the art in a demonstrable way?: Yes, they provide experiments and theoretical content.
Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?: Yes, their work is genuine.
Arguments for acceptance: It is a complete paper with empirical and theoretical content.
Arguments against acceptance: Other space expansion approaches proposed to NIPS this year are more general and practical; maybe there is no room in NIPS for all of them. NIPS being the best ML conference, the quality of the paper may be borderline. If it were another conference, I would recommend this paper for acceptance blindly.
Typos:
-> I would not put \epsilon-accuracy in the abstract; it is too technical.
-> Phrase 3 in the abstract is not syntactically correct: "our method can find a point whose function value within \epsilon of the objective..."
-> The second phrase of the introduction is also not syntactically correct.
-> I miss a formal definition of the problem in the intro.
-> Global \epsilon-accuracy must be explained, introduced formally, or cited.
-> The sections of the paper should be introduced at the end of Section 1.
-> The problem setup assumes maximization.
More detailed comments and suggestions: I would like to congratulate the authors for the paper and recommend it for acceptance with a weak accept (6), because I think that it is a good paper, but NIPS has very high standards, and it is a pity to say that the other papers that I have reviewed that achieve this same task are more practical, more pragmatic. I would use them in practice, and I am afraid I would not use this one. This is the reason why I rate this paper a 6.
NIPS
Title
Bayesian Optimization with Unknown Search Space
Abstract
Applying Bayesian optimization in problems wherein the search space is unknown is challenging. To address this problem, we propose a systematic volume expansion strategy for Bayesian optimization. We devise a strategy to guarantee that, in iterative expansions of the search space, our method can find a point whose function value is within ε of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers a minimal expansion required iteratively. We derive analytic expressions for when to trigger the expansion and by how much to expand. We also provide theoretical analysis to show that our method achieves ε-accuracy after a finite number of iterations. We demonstrate our method on both benchmark test functions and machine learning hyperparameter tuning tasks and show that our method outperforms baselines.
1 Introduction
Choosing where to search matters. A time-tested path in the quest for new products or processes is through experimental optimization. Bayesian optimization offers a sample-efficient strategy for experimental design by optimizing expensive black-box functions [9–11]. One problem, however, is that users need to specify a bounded region to restrict the search for the objective function's extrema. When tackling a completely new problem, users do not have prior knowledge, hence there is no guarantee that an arbitrarily defined search space contains the global optimum. Thus, applying the Bayesian optimization framework when the search region is unknown remains an open challenge [16]. One approach is to use a regularized acquisition function such that its maximum can never be at infinity; hence no search space needs to be declared and an unconstrained optimizer can be used [16]. Other approaches use volume expansion, i.e. starting from the user-defined region, the search space is expanded during the optimization. The simplest strategy is to repeatedly double the volume of the search space every several iterations [16]. Nguyen et al. suggest a volume expansion strategy based on the evaluation budget [12]. All these methods require users to specify critical parameters, for example regularization parameters [16], growth rate and expansion frequency (volume doubling) [16], or budget [12]. These parameters are difficult to specify in practice. Additionally, [12] is computationally expensive and the user-defined search space needs to be close to the global optimum. In this paper, we propose a systematic volume expansion strategy for the Bayesian optimization framework wherein the search space is unknown. Without any prior knowledge about the objective function's argmax or strict assumptions on the behavior of the objective function, it is impossible to guarantee global convergence when the search space is continuously expanded. To circumvent this problem, we consider the setting where we achieve the global ε-accuracy condition, that is, we aim to find a point whose function value is within ε of the objective function's global maximum. Our volume expansion strategy is based on two guiding principles: 1) the algorithm can reach a point whose function value is within ε of the objective function maximum in one expansion, and 2) the search space should be minimally expanded so that the algorithm does not spend unnecessary evaluations near the search space boundary.
As the objective function is unknown, it is not possible to compute this ideal expansion region. Using the GP-UCB acquisition function as a surrogate, this region is computed as one that contains at least one point whose acquisition function value is within ε of the acquisition function maximum. However, by using a surrogate to approximate the objective function, there is no guarantee that we can achieve global ε-accuracy within one expansion. Hence multiple expansions are required, and a new expansion is triggered when the local ε-accuracy is satisfied, i.e. when the algorithm can find a point whose function value is within ε of the objective function maximum in the current search space. Analytical expressions for the size of the new expansion space and when to trigger the expansion are derived. The guarantee of the ε-accuracy condition, however, now lapses in the expanded region, so we adjust the acquisition function appropriately to maintain the guarantee. Finally, we provide theoretical analysis to show that our proposed method achieves the global ε-accuracy condition after a finite number of iterations. We demonstrate our algorithm on five synthetic benchmark functions and three real hyperparameter tuning tasks for common machine learning models: linear regression with elastic net, multilayer perceptron, and convolutional neural network. Our experimental results show that our method achieves better function values with fewer samples compared to state-of-the-art approaches. In summary, our contributions are:
• Formalising the analysis of the Bayesian optimization framework in an unknown search space setting, and introducing ε-accuracy as a way to track algorithmic performance;
• Providing analytic expressions for how far to expand the search space and when to expand the search space to achieve global ε-accuracy;
• Deriving theoretical global ε-accuracy convergence; and,
• Demonstrating our algorithm on both synthetic and real-world problems and comparing it against state-of-the-art methods.
Our method differs from previous works in that 1) our method does not require any algorithmic parameters, automatically adjusting both when to trigger the expansion and by how much to expand, and 2) our approach is the only one to guarantee the global ε-accuracy condition. This is because we guarantee the local ε-accuracy condition in each search space, thus eventually the global ε-accuracy is achieved. Without this local guarantee, the suggested solution cannot be guaranteed to reach global ε-accuracy. The regularization method [16] and the filtering method [12] require the global optimum to be within a bound constructed by either the user-specified regularizer or the budget. The volume doubling method [16] can continue to expand the search space to infinity; however, the local ε-accuracy condition is not guaranteed in each search space. The paper is organized as follows. Section 2 gives an overview of Bayesian optimization and discusses some of the related work. Section 3 describes the problem setup. Section 4 proposes our new expansion strategy for the Bayesian optimization framework when the search space is unknown. A theoretical analysis of our proposed method is presented in Section 5. In Section 6, we demonstrate the effectiveness of our algorithm through numerical experiments. Finally, Section 7 concludes the paper.
2 Background and Related Work
2.1 Background
Bayesian optimization is a powerful optimization method to find the global optimum of an unknown objective function f(x) through sequential queries [9–11, 17, 18]. First, at time t, a surrogate model is used to approximate the behaviour of f(x) using all the currently observed data Dt−1 = {(xi, yi)}ⁿi=1, yi = f(xi) + ξi, where ξi ∼ N(0, σ²) is the noise. Second, an acquisition function is constructed from the surrogate model to suggest the next point xitr to be evaluated. The objective function is then evaluated at xitr and the new data point (xitr, yitr) is added to Dt−1. These steps are conducted iteratively to obtain the best estimate of the global optimum. The most common choice of surrogate model used in Bayesian optimization is the Gaussian Process (GP) [14]. Assume the function f follows a GP with mean function m0(x) and covariance function k(x, x′). The posterior distribution of f given the observed data Dt−1 = {(xi, yi)}ⁿi=1 is a GP with the following posterior mean and variance,
µt−1(x) = m0(x) + k|Dt−1|(x)ᵀ(K|Dt−1| + σ²I|Dt−1|)⁻¹ y|Dt−1|,
σ²t−1(x) = k(x, x) − k|Dt−1|(x)ᵀ(K|Dt−1| + σ²I|Dt−1|)⁻¹ k|Dt−1|(x), (1)
where y|Dt−1| = [y1, . . . , y|Dt−1|]ᵀ, k|Dt−1|(x) = [k(x, xi)]^{|Dt−1|}_{i=1}, K|Dt−1| = [k(xi, xj)]i,j, I|Dt−1| is the |Dt−1| × |Dt−1| identity matrix, and |Dt−1| denotes the cardinality of Dt−1. To aid readability, in the sequel we drop the notation that shows the dependence of k, K, I, y on |Dt−1|. There are many existing acquisition functions [6, 7, 10, 11, 20]; in this paper, we focus only on the GP-UCB acquisition function [1, 2, 5, 19]. The GP-UCB acquisition function is defined as
αUCB(x;Dt−1) = µt−1(x) + √βt σt−1(x), (2)
where µt−1(x), σt−1(x) are the posterior mean and standard deviation of the GP given observed data Dt−1, and βt ≥ 0 is an appropriate parameter that balances exploration and exploitation. Given a search domain, {βt} can be chosen as in [19] to ensure global convergence in this domain.
2.2 Related Work
All work related to the problem of Bayesian optimization with an unknown search space has been described in Section 1. The work in [3] introduces the term ε-accuracy; however, its purpose is to unify the Bayesian optimization and level-set estimation frameworks.
3 Problem Setup
We wish to find the global argmax xmax of an unknown objective function f : Rd → R whose argmax is at a finite location, i.e.
xmax = argmaxx∈S∗ f(x), (3)
where S∗ is a finite region that contains the argmax of the function f(x). In practice, the region S∗ is not known in advance, so users need to identify a search domain Suser which is likely to contain the argmax of f(x). This search domain can be set arbitrarily or based on limited prior knowledge; thus there is no guarantee that Suser contains the global optimum of the objective function. In the trivial cases when the search space S∗ is known or when S∗ ⊂ Suser, global convergence can be guaranteed through classical analysis [4, 19]. Here, we consider the general case when S∗ may or may not be a subset of Suser. Without any prior knowledge about S∗ or strict assumptions on the behavior of the objective function, it is impossible to guarantee global convergence. Therefore, in this work, instead of solving Eq. (3), we consider the setting where we achieve the global ε-accuracy condition. That is, for a small positive value ε, we find a solution xε which satisfies
f(xmax) − f(xε) ≤ ε. (4)
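To ground Eqs. (1) and (2) above, here is a minimal NumPy sketch of the GP posterior (under the zero-mean convention of Assumption 4.1) and the GP-UCB acquisition; the SE kernel and all names are our own illustrative choices, not the paper's released code.

```python
import numpy as np

def se_kernel(A, B, theta=1.0, l=0.2):
    """Squared Exponential kernel with scale theta and lengthscale l."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return theta**2 * np.exp(-d2 / (2 * l**2))

def gp_posterior(x, X_obs, y_obs, sigma2=1e-4):
    """Posterior mean and variance of a zero-mean GP, as in Eq. (1)."""
    n = len(X_obs)
    K = se_kernel(X_obs, X_obs)
    k_x = se_kernel(X_obs, x[None, :])[:, 0]
    w = np.linalg.solve(K + sigma2 * np.eye(n), y_obs)  # (K + sigma^2 I)^{-1} y
    v = np.linalg.solve(K + sigma2 * np.eye(n), k_x)    # (K + sigma^2 I)^{-1} k(x)
    mu = k_x @ w
    var = se_kernel(x[None, :], x[None, :])[0, 0] - k_x @ v
    return mu, var

def alpha_ucb(x, X_obs, y_obs, beta_t, sigma2=1e-4):
    """GP-UCB acquisition of Eq. (2): mu_{t-1}(x) + sqrt(beta_t) * sigma_{t-1}(x)."""
    mu, var = gp_posterior(x, X_obs, y_obs, sigma2)
    return mu + np.sqrt(beta_t) * np.sqrt(max(var, 0.0))
```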
1. What are the limitations of the paper's empirical results and theoretical analysis? 2. How does the reviewer assess the practicality and efficiency of the proposed algorithm? 3. What are the concerns regarding the experimental results, particularly in terms of dimensionality and iteration number? 4. Are there any suggestions for improving the algorithm's performance in high-dimensional cases? 5. Is there a possibility of using alternative approaches, such as warped Gaussian processes, to enhance the search space reconstruction?
Review
Review
The current version provides limited explanation and analysis of its empirical results and feels a bit preliminary for now. The assumptions and implications of the bounds are not discussed. The theoretical choice of {beta_t} in Theorem 5.1 is typically overly conservative and thus the practical schedule is used. This is common to GP-UCB based algorithms and should be criticized more, because the theoretical analysis and the practical usage are different. In particular, it is more important to select the schedule of beta_t. My main concern is the experimental results. You only analyzed the low-dimensional case (d=2-3) in real problems, with a small number of iterations. Optimization should become inefficient if you expand the space, and extending the search space seems to need more trials of optimization. In a higher-dimensional case (d=10) with more iterations, e.g., 100 iterations, the existing methods might outperform your method. Using a warped Gaussian process in the input space is another approach to reconstructing the search space: the input search space stays limited but is warped by bijective transformations. See Snoek et al., Input Warping for Bayesian Optimization of Non-Stationary Functions.
NIPS
Title
Anticipating Performativity by Predicting from Predictions
Abstract
Predictions about people, such as their expected educational achievement or their credit risk, can be performative and shape the outcome that they are designed to predict. Understanding the causal effect of predictions on the eventual outcomes is crucial for foreseeing the implications of future predictive models and selecting which models to deploy. However, this causal estimation task poses unique challenges: model predictions are usually deterministic functions of input features and highly correlated with outcomes. This can make the causal effect of predictions on outcomes impossible to disentangle from the direct effect of the covariates. We study this problem through the lens of causal identifiability. Despite the hardness of this problem in full generality, we highlight three natural scenarios where the causal effect of predictions can be identified from observational data: randomization in predictions, overparameterization of the predictive model deployed during data collection, and discrete prediction outputs. Empirically, we show that, given our identifiability conditions hold, standard variants of supervised learning that predict from predictions by treating the prediction as an input feature can find transferable functional relationships that allow for conclusions about newly deployed predictive models. These positive results fundamentally rely on model predictions being recorded during data collection, bringing forward the importance of rethinking standard data collection practices to enable progress towards a better understanding of social outcomes and performative feedback loops.
1 Introduction
Predictions can impact sentiments, alter expectations, inform actions, and thus change the course of events. Through their influence on people, predictions have the potential to change the regularities in the population they seek to describe and understand. This insight underlies the theories of performativity [38] and reflexivity [62] that play an important role in modern economics and finance. Recently, Perdomo et al. [51] pointed out that the social theory of performativity has important implications for machine learning theory and practice. Prevailing approaches to supervised learning assume that features X and labels Y are sampled jointly from a fixed underlying data distribution that is unaffected by attempts to predict Y from X. Performativity questions this assumption and suggests that the deployment of a predictive model can disrupt the relationship between X and Y. Hence, changes to the predictive model can induce shifts in the data distribution. For example, consider a lender with a predictive model for risk of default: performativity could arise if individuals who are predicted as likely to default are given higher-interest loans, which make default even more likely [41], akin to a self-fulfilling prophecy. In turn, a different predictive model that predicts smaller risk and suggests offering more low-interest loans could cause some individuals who previously looked risky to be able to pay the loans back, which would appear as a shift in the relationship between features X and loan repayment outcomes Y. This performative nature of predictions poses a challenge to using historical data to predict the outcomes that will arise under the deployment of future models.
1.1 Our work
In this work, we aim to understand under what conditions observational data is sufficient to identify the performative effects of predictions. Only when causal identifiability is established can we rely on data-driven strategies to anticipate performativity and reason about the downstream consequences of deploying new models. Towards this goal, we focus on a subclass of performative prediction problems in this paper where performative effects of predictions solely surface as a shift in the outcome variable, and the distribution over covariates X is unaffected by the prediction Ŷ. Our goal is to identify the expected counterfactual outcome MY(x, ŷ) := E[Y | X = x, do(Ŷ = ŷ)]. Understanding the causal mechanism MY is crucial for model evaluation as well as model optimization. In particular, it allows for offline evaluation of the potential outcome Y of an individual X subject to a predictive model fnew with the prediction Ŷ = fnew(X) before actually deploying it.
The need for observing predictions. We start by illustrating the hardness of performativity-agnostic learning by relating performative prediction to a concept shift problem. Using the specifics of the performative shift, we establish a lower bound on the extrapolation error of predicting Y from X under the deployment of a new model fnew that is different from the model ftrain deployed during data collection. In particular, the extrapolation error grows with the distance between the prediction functions of the two models and the strength of performativity. This lower bound on the extrapolation error demonstrates the necessity of taking performativity into account for reliably predicting Y.
Predicting from predictions. We then explore the feasibility of learning performative effects when the training data recorded the predictions and training samples (X, Y, Ŷ) are available. As an identification strategy for learning MY, we focus on building a meta machine learning model that predicts Y for an individual with features X subjected to a prediction Ŷ. We term this data-driven strategy predicting from predictions; it treats the prediction as an input to the meta machine learning model. The meta model seeks to answer "what would the outcome be if we were to deploy a different prediction model?" Crucially, this "what if" question is causal in nature; it aims to understand the potential outcome under an intervention, which is different from merely estimating the outcome variable in previously seen data. Whether such a transferable model is learnable depends on whether the training data provides causal identifiability [49]. Only after causal identifiability is established can we rely on observational data to select and design optimal prediction models under performativity.
Establishing identifiability. For our main technical results, we first show that, in general, observing Ŷ is not sufficient for identifying the causal effects of predictions. In particular, if the training data was collected under the deployment of a deterministic prediction function, the mechanism MY cannot be uniquely identified. The reason is a lack of coverage in the training data, as X and Ŷ are deterministically bound. Next, we establish several conditions under which observing Ŷ is sufficient for identifying MY. The first condition exploits the presence of randomness in the prediction. This randomness could be purposely built into the prediction for individual fairness, differential privacy, or other considerations.
The second condition exploits the property that predictive models are often over-parameterized, which leads to an incongruence in functional complexity between different causal paths, enabling the effects of predictions to be separated from other variables' effects. The third condition takes advantage of discreteness in predictions, such that performative effects can be disentangled from the continuous relationship between covariates and outcomes. Together, these results reveal that particularities of the performative prediction problem can enable us to recover the causal effect of predictions from observational data. In particular, we show that, under these conditions, standard supervised learning techniques can be used to find these transferable functional relationships by treating predictions as model inputs. Empirically, we demonstrate that supervised learning succeeds in finding MY even in finite samples. We conclude with a discussion of limitations and extensions of our work, pointing out potential violations of the modeling assumptions underlying our causal analysis and proposing directions for future work.
1.2 Broader context and related work
The work by Perdomo et al. [51] initiated the discourse on performativity in the context of supervised learning by pointing out that the deployment of a predictive model can impact the data distribution we train our models on. Existing scholarship on performative prediction [c.f., 51, 42, 12, 44, 24, 26, 68, 45, 52, 31] has predominantly focused on achieving a particular solution concept with a prediction function that maps X to Y in the presence of unknown performative effects. We are interested in understanding the underlying causal mechanism of the performative distribution shift. Our work is motivated by the seemingly natural approach of lifting the supervised learning problem and incorporating the prediction as an input feature when building a meta machine learning model for explaining Y. By establishing a connection to causal identifiability, our goal is to understand when such a data-driven strategy can help anticipate the downstream effects of predictions.
This work focuses on the setting where predictions lead to changes in the relationship between covariates X and label Y, while the marginal distribution P(X) over covariates is assumed to be fixed. This setting, where performativity only surfaces in the label, describes an interesting subclass of problems falling under the umbrella of performative (a.k.a. model-induced or decision-dependent) distribution shifts [51, 37, 12]. Our assumptions are complementary to the strategic classification framework [8, 20], which focuses on a setting where performative effects concern P(X), while P(Y|X) is assumed to remain stable. Consequently, causal questions in strategic classification [e.g., 22, 3, 59] are concerned with identifying stable causal relationships between X and Y. Since we assume P(Y|X) can change (i.e. the true underlying 'concept' determining outcomes can change), conceptually different questions emerge in our work. Similar in spirit to strategic classification, the work on algorithmic recourse and counterfactual explanations [32, 28, 65] focuses on the causal link between features and predictions, whereas we focus on the downstream effects of predictions. There are interesting parallels between our work and related work on the offline evaluation of online policies [e.g., 35, 63, 36, 58].
In particular, [63] explicitly emphasize the importance of logging the propensities of the deployed policy during data collection to be able to mitigate selection bias. In our work, the deployed model can induce a concept shift. Thus, we find that additional information about the predictions of the deployed model needs to be recorded to be able to foresee the impact of a new predictive model on the conditional distribution P(Y|X), beyond enabling propensity weighting [55]. A notable work by [66] investigates how predictions at one time step impact predictions in future time steps. Complementary to these existing works, we show that randomness in the predictive model is not the only way causal effects of predictions can be identified. For our theoretical results, we build on classical tools from causal inference [48, 57, 64]. In particular, we distill unique properties of the performative prediction problem to design assumptions for the identifiability of the causal effect of predictions.
2 The causal force of prediction
Predictions can be performative and impact the population of individuals they aim to predict. Formulated in the language of causal inference [48], the deployment of a predictive model represents an intervention on a causal diagram that describes the underlying data-generating process of the population. We expand on this causal perspective to study an instance of the performative prediction problem described below.
2.1 Prediction as a partial mediator
Consider a machine learning application relying on a predictive model f that maps features X to a predicted label Ŷ. We assume the predictive model f is performative in that the prediction Ŷ = f(X) has a direct causal effect on the outcome variable Y of the individual it concerns. Thereby, the prediction impacts how the outcome variable Y is generated from the features X. The causal diagram illustrating this setting is visualized in Figure 1. The features X ∈ X ⊆ Rd are drawn i.i.d. from a fixed underlying continuous distribution over covariates DX with support X. The outcome Y ∈ Y ⊆ R is a function of X, partially mediated by the prediction Ŷ ∈ Y. The prediction Ŷ is determined by the deployed predictive model f : X → Y. For a given prediction function f, every individual is assumed to be sampled i.i.d. from the data-generating process described by the causal graph in Figure 1. We assume the exogenous noise ξY is zero-mean, and ξf allows the prediction function to be randomized. Note that our model is not meant to describe performativity in its full generality (which includes other ways f may affect P(X,Y)). Rather, it describes an important and practically relevant class of performative feedback problems characterized by two properties: 1) performativity surfaces only in the label Y, and 2) performative effects are mediated by the prediction, such that Y ⊥ f | Ŷ, rather than being dependent on the specifics of the decision rule.
Application examples. Causal effects of predictions on outcomes have been documented in multiple contexts: a bank's prediction about a client (e.g., his or her creditworthiness when applying for a loan) determines the interest rate assigned to them, which in turn changes the client's financial situation [41]. Mathematical models that predict stock prices inform the actions of traders and thus heavily shape financial markets and economic realities [38]. Zillow's housing price predictions directly impact sales prices [39].
Application examples. Causal effects of predictions on outcomes have been documented in multiple contexts: A bank's prediction about a client (e.g., his or her creditworthiness when applying for a loan) determines the interest rate assigned to them, which in turn changes the client's financial situation [41]. Mathematical models that predict stock prices inform the actions of traders and thus heavily shape financial markets and economic realities [38]. Zillow's housing price predictions directly impact sales prices [39]. Predictions about the severity of an illness play an important role in treatment decisions and hence the very chance of survival of the patient [34]. Another prominent example from psychology is the Pygmalion effect [56]. It refers to the phenomenon that high expectations lead to improved performance, which is widely documented in the context of education [6], sports [61], and organizations [16]. Examples of such performativity abound, and we hope to have convinced the reader that performative effects in the label are important for algorithmic prediction.

2.2 Implications for performativity-agnostic learning

We begin by considering the classical supervised learning task where Ŷ is unobserved. The goal is to learn a model h : 𝒳 → 𝒴 for predicting the label Y from the features X. To understand the inherent challenge of classical prediction under performativity, we investigate the relationship between X and Y more closely. Specifically, the data generation process (Figure 1) implies that

P(Y|X) = ∫ P(Y|Ŷ, X) P(Ŷ|X) dŶ.  (4)

This expression makes explicit how the relationship between X and Y that we aim to learn depends on the predictive model governing P(Ŷ|X). As a consequence, when the deployed predictive model at test time differs from the model at training time, performative effects surface as concept shift [17]. Such distribution shift problems are known to be intractable without structural knowledge about the shift, implying that we cannot expect h to generalize to distributions induced by future model deployments. Let us inspect the resulting extrapolation gap in more detail and put existing positive results on performative prediction into perspective.

Extrapolation loss. We illustrate the effect of performativity on predictive performance using a simple instantiation of the structural causal model from Figure 1. To this end, assume a linear performative effect of strength α > 0 and a base function g1 : 𝒳 → 𝒴:

g(X, Ŷ) := g1(X) + αŶ.  (5)

Now, assume we collect training data under the deployment of a predictive model fθ and validate our model under the deployment of fφ. We adopt the notion of a distribution map from Perdomo et al. [51] and write D_XY(f) for the joint distribution over (X, Y) surfacing from the deployment of a model f. We assess the quality of our predictive model h : 𝒳 → 𝒴 over a distribution D_XY(f) induced by f via the loss function ℓ : 𝒴 × 𝒴 → R, and write R_f(h) := E_{x,y∼D_XY(f)} ℓ(h(x), y) for the risk of h on the distribution induced by f. We use h*_f for the risk minimizer h*_f := argmin_{h∈H} R_f(h), and H for the hypothesis class we optimize over. Proposition 1 bounds the extrapolation loss and can be viewed as a concrete instantiation, within the feedback model of Figure 1, of the more general extrapolation bounds for performative prediction discussed in [37].

Proposition 1 (Hardness of performativity-agnostic prediction). Consider the data generation process in Figure 1 with g given in (5) and fθ, fφ being deterministic functions. Take a loss function ℓ : 𝒴 × 𝒴 → R that is γ-smooth and µ-strongly convex in its second argument. Let h*_{fθ} be the risk minimizer over the training distribution and assume the problem is realizable, i.e., h*_{fθ} ∈ H. Then, we can bound the extrapolation loss of h*_{fθ} on the distribution induced by fφ as

(γ/2) α² d²_{D_X}(fθ, fφ) ≥ ΔR_{fθ→fφ}(h*_{fθ}) ≥ (µ/2) α² d²_{D_X}(fθ, fφ),  (6)

where d²_{D_X}(fθ, fφ) := E_{x∼D_X} (fθ(x) − fφ(x))² and ΔR_{fθ→fφ}(h) := R_{fφ}(h) − R_{fθ}(h).
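Under the squared loss, γ = µ = 2 and the two bounds in (6) coincide, so the gap equals α² d²_{D_X}(fθ, fφ). This is easy to check numerically, continuing the illustrative simulation sketch from Section 2.1 (again, purely a sanity check under assumed functional forms):

    from sklearn.linear_model import LinearRegression

    # Performativity-agnostic risk minimizer fit on data collected under f_theta.
    h = LinearRegression().fit(X, Y)

    f_phi = lambda X: np.zeros(len(X))        # an arbitrary alternative deployment
    X2, _, Y2 = sample_population(f_theta, g) # fresh draw under f_theta
    Xp, _, Yp = sample_population(f_phi, g)   # draw under f_phi

    gap = np.mean((h.predict(Xp) - Yp) ** 2) - np.mean((h.predict(X2) - Y2) ** 2)
    d2 = np.mean((f_theta(Xp) - f_phi(Xp)) ** 2)
    print(gap, alpha ** 2 * d2)               # the two numbers should roughly agree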
The extrapolation loss ΔR_{fθ→fφ}(h*_{fθ}) is zero if and only if either the strength of performativity tends to zero (α → 0), or the predictions of the two predictors fθ and fφ are identical over the support of D_X. If this is not the case, an extrapolation gap is inevitable. This elucidates the fundamental hardness of performative prediction from feature-label pairs (X, Y) when performative effects disrupt the causal relationship between X and Y. The special case where α = 0 aligns with the assumption of classical supervised learning, in which there is no performativity. This may hold in practice if the predictive model is solely used for descriptive purposes, or if the agent making the prediction does not enjoy any economic power [21]. The second special case where the extrapolation error is small is when d²_{D_X}(fθ, fφ) → 0, in which case D_XY(fθ) and D_XY(fφ) are equal in distribution and hence exhibit the same risk minimizer. Such a scenario can happen, for example, if the model fφ is obtained by retraining fθ on observational data and a fixpoint is reached (fθ = h*_{fθ}). The convergence of policy optimization strategies to such fixpoints (performative stability) has been studied in prior work [e.g., 51, 42, 12] and enabled optimality results even in the presence of performative concept shifts, relying on the target model fφ not being chosen arbitrarily, but based on a pre-specified update strategy.

3 Identifying the causal effect of prediction

Having illustrated the hardness of performativity-agnostic learning, we explore under what conditions incorporating the presence of performative predictions into the learning task enables us to anticipate the performative effects of Ŷ on Y. Towards this goal, we assume that the mediator Ŷ in Figure 1 is observed; the prediction takes on the role of the treatment in our causal analysis, and we cannot possibly hope to estimate the treatment effect of a treatment that is unobserved.

3.1 Problem setup

Assume we are given access to data points (x, ŷ, y) generated i.i.d. from the structural causal model in Figure 1 under the deployment of a prediction function fθ. From this observational data, we wish to estimate the expected potential outcome of an individual under the deployment of an unseen (but known) predictive model fφ. We note that, given our causal graph, the implication of intervening on the function f can equivalently be explained by an intervention on the prediction Ŷ. Thus, we are interested in identifying the causal mechanism

M_Y(x, ŷ) := E[Y | X = x, do(Ŷ = ŷ)].  (7)

Unlike P(Y|X), the mechanism M_Y is invariant to changes in the predictive model governing P(Ŷ|X). Thus, being able to identify M_Y will allow us to make inferences about the potential outcome surfacing from planned model updates, beyond explaining patterns in historical data. We can evaluate M_Y to infer y for any x at ŷ = fφ(x), for fφ being the model of interest. For simplicity of notation, we will write D(fθ) to denote the joint distribution over (X, Ŷ, Y) of the observed data collected under the deployment of the predictive model fθ. We say M_Y can be identified if it can uniquely be expressed as a function of observed data. More formally:

Definition 1 (identifiability). Given a predictive model f, the causal graph in Figure 1, and a set of assumptions A, we say M_Y is identifiable from D(f) if any function h that complies with assumptions A and satisfies h(x, ŷ) = M_Y(x, ŷ) for pairs (x, ŷ) ∈ supp(D_XŶ(f)) must also satisfy h(x, ŷ) = M_Y(x, ŷ) for all pairs (x, ŷ) ∈ 𝒳 × 𝒴.
Without causal identifiability, there might be models h′ ≠ M_Y that explain the training distribution equally well but do not transfer to the distribution induced by the deployment of a new model. Causal identifiability is crucial for enabling extrapolation. It quantifies the limits of what we can infer given access to the training data distribution, ignoring finite-sample considerations.

Identification with supervised learning. Identifiability of M_Y from samples of D(fθ) implies that the historical data collected under the deployment of fθ contains sufficient information to recover the invariant relationship (7). As a concrete identification strategy, consider the following standard variant of supervised learning that takes in samples (x, ŷ, y) and builds a meta-model that predicts Y from X, Ŷ by solving the risk minimization problem

h_SL := argmin_{h∈H} E_{(x,ŷ,y)∼D(fθ)} [(h(x, ŷ) − y)²],  (8)

where H denotes the hypothesis class. We consider the squared loss for risk minimization because it pairs well with the exogenous noise ξ_Y in (3) being additive and zero mean. The strategy (8) is an instance of what we term predicting from predictions. Lemma 2 provides a sufficient condition for the supervised learning solution h_SL to recover the invariant causal quantity M_Y.

Lemma 2 (Identification strategy). Consider the data generation process in Figure 1 and a set of assumptions A. Suppose we are given a hypothesis class H such that every h ∈ H complies with A and the problem is realizable, i.e., M_Y ∈ H. Then, if M_Y is causally identifiable from D(fθ) given A, the risk minimizer h_SL in (8) will coincide with M_Y.

3.2 Challenges for identifiability

The main challenge for identification of M_Y from data is that, in general, the prediction rule fθ which produces Ŷ is a deterministic function of the covariates X. This means that, for any realization of X, we only get access to one Ŷ = fθ(X) in the training distribution, which makes it challenging to disentangle the direct and the indirect effects of X on Y. To illustrate this challenge, consider the function h(x, ŷ) := M_Y(x, fθ(x)) that ignores the input parameter ŷ and relies only on x for explaining the outcome. This function explains y equally well and cannot be distinguished from M_Y based on data collected under the deployment of a deterministic prediction rule fθ. The problem is akin to fitting a linear regression model to two perfectly correlated covariates. More broadly, this ambiguity is due to what is known as a lack of overlap (or lack of positivity) in the causal inference literature [47, 23]. In the covariate shift literature, the lack of overlap surfaces when the covariate distribution violates the common support assumption and the propensity scores are not well-defined (see, e.g., Pan and Yang [46]). This problem renders causal identification, and thus data-driven learning of performative effects from deterministic predictions, fundamentally challenging.

Proposition 3 (Nonidentifiability from deterministic predictions). Consider the structural causal model in Figure 1. Assume Y depends non-trivially on Ŷ, and the set 𝒴 is not a singleton. Then, given a deterministic prediction function f, the mechanism M_Y is not identifiable from D(f).

The identifiability issue persists as long as the two variables X, Ŷ are deterministically bound and there is no incongruence or hidden structure that can be exploited to disentangle the direct effect of X on Y from the indirect effect mediated by Ŷ.
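The ambiguity behind Proposition 3 is easy to reproduce numerically. In the self-contained sketch below (all functional forms are illustrative), two candidate mechanisms agree exactly on data collected under a deterministic fθ, yet disagree as soon as we intervene and deploy a different model:

    import numpy as np

    rng = np.random.default_rng(1)
    alpha = 0.5
    g1 = lambda x: 2.0 * x                    # direct effect of X on Y (assumed)
    f_theta = lambda x: 2.0 * x               # deterministic deployed model

    x = rng.normal(size=5_000)
    y_hat = f_theta(x)

    # Two candidate mechanisms, both perfectly consistent with D(f_theta):
    h_causal = lambda x, y_hat: g1(x) + alpha * y_hat          # the true M_Y
    h_shortcut = lambda x, y_hat: g1(x) + alpha * f_theta(x)   # ignores y_hat

    print(np.allclose(h_causal(x, y_hat), h_shortcut(x, y_hat)))        # True
    f_phi = lambda x: np.zeros_like(x)        # intervention: a different model
    print(np.allclose(h_causal(x, f_phi(x)), h_shortcut(x, f_phi(x))))  # False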
In the following, we focus on particularities of prediction problems and show how they allow us to identify M_Y.

3.3 Identifiability from randomization

We start with the most natural setting that provides identifiability guarantees: randomness in the prediction function fθ. Using standard arguments about overlap [47], we can identify M_Y(x, ŷ) for any pair (x, ŷ) with positive probability in the data distribution D(fθ) from which the training data is sampled. To relate this to our goal of identifying the outcome under the deployment of an unseen model fφ, we introduce the following definition:

Definition 2 (output overlap). Given two predictive models fθ, fφ, the model fφ is said to satisfy output overlap with fθ if for all x ∈ 𝒳 and any subset 𝒴′ ⊆ 𝒴 with positive measure, it holds that

P[fφ(x) ∈ 𝒴′] > 0  ⇒  P[fθ(x) ∈ 𝒴′] > 0.  (9)

In particular, output overlap requires the support of the new model's predictions fφ(x) to be contained in the support of fθ(x) for every potential x ∈ 𝒳. The following proposition takes advantage of the fact that the joint distribution over (X, Y) is fully determined by the deployed model's predictions to relate output overlap to identification:

Proposition 4. Given the causal graph in Figure 1, the mechanism M_Y(x, ŷ) is identifiable from D(fθ) for any pair (x, ŷ) with ŷ = fφ(x), as long as fφ is a prediction function that satisfies output overlap with fθ.

Proposition 4 allows us to pinpoint the models fφ to which we can extrapolate from data collected under fθ. Furthermore, it makes explicit that, for collecting data to learn about performative effects, it is ideal to deploy a predictor fθ that is randomized so that the prediction output has full support over 𝒴 for any x. Such a model would generate a dataset that guarantees global identification of M_Y over 𝒳 × 𝒴 and thus robust conclusions about any future deployable model fφ. One interesting and relevant setting that satisfies this property is the differentially private release of predictions through an additive Laplace (or Gaussian) noise mechanism applied to the output of the prediction function [13]. (In Appendix B we discuss two additional natural sources of randomness, namely randomized decisions and noisy measurements of covariates, that can potentially help identification with appropriate side-information.) While randomization is standard in the literature, a caveat of identification from randomization is that there are several reasons a decision-maker may choose not to deploy a randomized prediction function in performative environments, including negative externalities and concerns about user welfare [29], but also business interests in preserving the consumer value of the prediction-based service offered. In the context of our credit scoring example, random predictions would imply that interest rates are randomly assigned to applicants in order to learn how the rates impact their probability of paying back; given regulatory requirements for lending institutions, we cannot presently observe this scenario.
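As a sanity check of Proposition 4, the following self-contained sketch perturbs the deployed model's output with Laplace noise (as a differentially private release mechanism would) and shows that a supervised meta-model fit on (X, Ŷ) then recovers the performative effect; all constants and names are illustrative:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    alpha, n = 0.5, 50_000
    x = rng.normal(size=n)
    y_hat = 2.0 * x + rng.laplace(scale=0.5, size=n)   # randomized predictions: overlap
    y = 2.0 * x + alpha * y_hat + rng.normal(scale=0.1, size=n)

    h_SL = LinearRegression().fit(np.column_stack([x, y_hat]), y)
    print(h_SL.coef_)   # approx [2.0, 0.5]: direct effect and performative effect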
3.4 Identifiability through overparameterization

The following two sections consider situations where we can achieve identification without randomization, from data collected under a deterministic fθ. Our first result exploits incongruences in functional complexity arising from machine learning models that are overparameterized [e.g., 30]. By overparameterization, we refer to the fact that the representational complexity of the model is larger than that of the underlying concept it needs to describe.

Assumption 1 (overparameterization). We say a function f is overparameterized with respect to G over 𝒳 if there is no function g′ ∈ G and c ∈ R such that f(x) = c · g′(x) for all x ∈ 𝒳.

A challenge for identification is that, for deterministic fθ, the prediction can be reconstructed from X without relying on Ŷ, and thus h(x, ŷ) = M_Y(x, fθ(x)) cannot be distinguished from M_Y based on observational data. However, note that this ambiguity relies on there being an admissible h such that h(·, ŷ) for a fixed ŷ can represent fθ. If fθ is overparameterized with respect to the hypothesis class H, this ambiguity is resolved. Let us make this intuition concrete with an example:

Example 3.1. Assume the structural equation for y in Figure 1 is g(x, ŷ) = αx + βŷ for some unknown α, β. Consider prediction functions fθ of the form fθ(x) = γx² + ξx for some γ, ξ ≥ 0, and let H be the class of linear functions. Then, any admissible estimate h ∈ H takes the form h(x, ŷ) = α′x + β′ŷ. For h to be consistent with observations we need α′ + β′ξ = α + βξ and β′γ = βγ. This system of equations has a unique solution as long as γ > 0, which corresponds to the case where fθ is overparameterized with respect to H. In contrast, for γ = 0, the function h(x, ŷ) = (α + βξ)x would explain the training data equally well.

The following result generalizes this argument to separable functions.

Proposition 5. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that g can be decomposed as g(X, Ŷ) = g1(X) + αŶ for some α > 0 and g1 ∈ G, where the function class G is closed under linear combinations (i.e., g1, g2 ∈ G ⇒ a1·g1 + a2·g2 ∈ G for all a1, a2 ∈ R). Let H contain functions that are separable in X and Ŷ, linear in Ŷ, and such that h(·, ŷ) ∈ G for every h ∈ H and any fixed ŷ. Then, if fθ is overparameterized with respect to G over the support of D_X, M_Y is identifiable from D(fθ).

3.5 Identifiability from classification

A second ubiquitous source of incongruence that we can exploit for identification is the discrete nature of predictions in the context of classification. The resulting discontinuity in the relationship between X and Ŷ enables us to disentangle M_Y from the direct effect of X on Y. This identification strategy is akin to the popular regression discontinuity design [33] and relies on the assumption that all other variables in X are continuously related to Y around the discontinuities in Ŷ.

Proposition 6. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that the structural equation for Y is separable, g(X, Ŷ) = g1(X) + g2(Ŷ) for all X, Ŷ, for some differentiable functions g1 and g2. Further, suppose X is a continuous random variable and Ŷ is a discrete random variable that takes on at least two distinct values with non-zero probability. Then, M_Y is identifiable from D(fθ).

Similar to Proposition 5, the separability assumption together with incongruence provides a way to disentangle the direct effect from the indirect effect of X on Y. Separability is necessary in order to achieve global identification guarantees without randomness; the identification of entangled components without overlap is fundamentally hard. Thus, under violations of the separability assumption, we can only expect the separable components of g to be identified correctly. Similarly, a regression discontinuity design only enables the identification of the causal effect locally around the discontinuity. Extrapolation away from the decision boundary, to models fφ that are substantially different from fθ, increasingly relies on separability holding true.
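The discreteness-based mechanism of Proposition 6 can also be checked with a small self-contained sketch: rounding the deployed model's output to a grid breaks the collinearity between X and Ŷ, so a correctly specified separable meta-model recovers both effects (all constants are illustrative):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    alpha, n = 0.5, 50_000
    x = rng.normal(size=n)
    y_hat = np.round(2.0 * x)                 # discrete predictions, deterministic in x
    y = 2.0 * x + alpha * y_hat + rng.normal(scale=0.1, size=n)

    h_SL = LinearRegression().fit(np.column_stack([x, y_hat]), y)
    print(h_SL.coef_)   # approx [2.0, 0.5] despite f_theta being deterministic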
4 Empirical evaluation

We investigate empirically how well the supervised learning solution h_SL in (8) is able to identify the causal mechanism M_Y from observational data in practical settings with finite data.

Methodology. We generated semi-synthetic data for our experiments, using a Census income prediction dataset from folktables.org [11]. Using this dataset as a starting point, we simulate a training dataset and a test dataset with distribution shift as follows: First, we choose two different predictors fθ and fφ to predict a target variable of interest (e.g., income) from covariates X (e.g., age, occupation, education, etc.). If not specified otherwise, fθ is fit to the original dataset to minimize squared error, while fφ is trained on randomly shuffled labels. Next, we posit a function g for simulating the performative effects. Then, we generate a training dataset of (X, Ŷ, Y) tuples from the causal model in Figure 1, using the covariates X from the original data, g, and fθ to generate Ŷ and Y. Similarly, we generate a test dataset of (X, Ŷ, Y) tuples using X, g, and fφ. We assess how well supervised methods learn transferable functional relationships by fitting a model h_SL to the training dataset and then evaluating the root mean squared error (RMSE) for regression and the accuracy for classification on the test dataset. In our figures, we visualize the standard error over 10 replicates with different random seeds, and we compare against an in-distribution baseline trained and evaluated on samples of D(fφ). If not specified otherwise, we use N = 200,000 samples.
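The protocol can be summarized in a short pipeline sketch. The snippet below substitutes synthetic covariates for the Census features and uses a permuted coefficient vector as a stand-in for a model trained on shuffled labels; g, fθ, fφ, and all constants are illustrative:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(4)
    n, d, alpha = 200_000, 5, 0.5
    X = rng.normal(size=(n, d))                    # stand-in for Census covariates
    beta = rng.normal(size=d)

    g = lambda X, y_hat: X @ beta + alpha * y_hat  # posited performative effect

    f_theta = lambda X: X @ beta                   # deployed during data collection
    beta_phi = rng.permutation(beta)
    f_phi = lambda X: X @ beta_phi                 # stand-in for a shuffled-label model

    def simulate(f):
        """Generate (features, outcome) under deployment of f, per Figure 1."""
        y_hat = f(X)
        y = g(X, y_hat) + rng.normal(size=n)
        return np.column_stack([X, y_hat]), y

    train_X, train_y = simulate(f_theta)
    test_X, test_y = simulate(f_phi)
    h_SL = LinearRegression().fit(train_X, train_y)   # predicting from predictions
    rmse = mean_squared_error(test_y, h_SL.predict(test_X)) ** 0.5
    print(rmse)  # whether h_SL transfers depends on the conditions of Section 3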
4.1 Necessity of identification guarantees for supervised learning

We start by illustrating why our identification guarantees are crucial for supervised learning under performativity. To this end, we instantiate the structural equation g in Figure 1 as

g(X, Ŷ) = g1(X) + αŶ  (10)

with g1(X) = β⊤X and ξ_Y ∼ N(0, 1). The coefficients β are determined by linear regression on the original dataset. The hyperparameter α quantifies the performativity strength, which we vary in our experiments. The predictions Ŷ are generated from a linear model fθ that we modify to illustrate the resulting impact on identifiability. We optimize h_SL in (8) over H, the class of linear functions.

We start by illustrating a failure mode of supervised learning in a non-identifiable setting (Proposition 3). To do so, we let fθ be a deterministic linear model fit to the base dataset (fθ(X) ≈ β⊤X). As a result, M_Y is not identifiable from D(fθ). In Figure 2(a) we can see that supervised learning indeed struggles to identify a transferable functional relationship from the training data. The meta-model returns h_SL(X, Ŷ) = (1 + α)Ŷ instead of identifying g, which leads to a high extrapolation error independent of the strength of performativity. While we only show the error for one fφ in Figure 2(a), the error grows with the distance d²_{D_X}(fθ, fφ). In contrast, when the feature Ŷ is not included, the supervised learning strategy returns h_SL(X) = (1 + α)β⊤X. The extrapolation loss of this performativity-agnostic model scales with the strength of performativity (Proposition 1) and is thus strictly smaller than the error of the model that predicts from predictions.

Next, we move to the regime of our identification results (Propositions 4-6). To do so, we modify the way the predictions in the training data are generated. In Figure 2(b) we use additive Gaussian noise to determine the predictions as Ŷ = fθ(X) + η with η ∼ N(0, σ²). In Figure 2(c) we augment the input to fθ with second-degree polynomial features to achieve overparameterization. In Figure 2(d) we round the predictions of fθ to obtain discrete values. In all three cases, including Ŷ as a feature is beneficial and allows the model to match the in-distribution accuracy baseline, closing the extrapolation gap that is inevitable for performativity-agnostic prediction.

4.2 Strength of incongruence and finite samples

We next conduct an ablation study and investigate how the degree of overparameterization and the noise level of a randomized fθ impact the extrapolation performance of supervised learning. To this end, we consider the setup in (10) with a general function g1. We fix the level of performativity at α = 0.5 for this experiment. We optimize h_SL in (8) over H (which we vary).

In Figure 3(a) we investigate the effect of overparameterization of fθ on the extrapolation error of h_SL. We choose fully connected neural networks with a single hidden layer to represent the functions g1, fθ, and h_SL. For g1 and H we take a neural network with m = 3 units in the hidden layer. The model g1 is fit to the original dataset. We vary the number of units in the hidden layer of fθ, denoted mθ. As expected, the extrapolation error decreases with the complexity of fθ. As soon as mθ > m, there is a significant benefit to including predictions as features. In this regime, M_Y becomes identifiable, as Proposition 5 suggests. In turn, without access to Ŷ, the model suffers an inevitable extrapolation gap due to a concept shift that is independent of the properties of fθ. In Figure 3(b) we investigate the effect of the magnitude of additive noise added to the predictions. Here H and g1 are linear functions. We have Ŷ = fθ(X) + βη with η ∼ N(0, 1), and we vary the noise level β. We see that even small amounts of noise are sufficient for identification, and adding Ŷ as a feature to our meta machine learning model is effective as soon as the noise in fθ is non-zero. In Figure 3(c) we fix the noise level at β = 0.5 and vary the number of samples N. We find that only moderate dataset sizes are necessary for predicting from predictions to approximate M_Y in our identifiable settings.
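A compact stand-in for the noise ablation (again with synthetic data and illustrative constants): as the noise level grows from zero, the coefficient that h_SL assigns to Ŷ snaps to the true performative strength.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    alpha, n = 0.5, 200_000
    x = rng.normal(size=n)

    for beta_noise in [0.0, 0.01, 0.1, 1.0]:   # vary the noise level
        y_hat = 2.0 * x + beta_noise * rng.normal(size=n)
        y = x + alpha * y_hat + rng.normal(scale=0.1, size=n)
        h_SL = LinearRegression().fit(np.column_stack([x, y_hat]), y)
        # at beta_noise = 0 the features are collinear and the estimate is arbitrary;
        # for any beta_noise > 0 the estimate approaches alpha = 0.5
        print(f"noise={beta_noise:5}: effect estimate = {h_SL.coef_[1]:.3f}")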
5 Discussion

This paper focused on identifying the causal effect of predictions on outcomes from observational data. We point out several natural situations where this causal question can be answered, but we also highlight situations where observational data is not sufficiently informative to reason about performative effects. By establishing a connection between causal identifiability and the feasibility of anticipating performative effects using data-driven techniques, this paper contributes to a better understanding of the suitability of supervised learning techniques for explaining social effects arising from the deployment of predictive models in economically and socially relevant applications. We hope the positive results in this work serve as a message for data collection: only if predictions are observed can they be incorporated to anticipate the performative effects of future model deployments. Thus, access to this information is crucial for an analyst hoping to understand the effects of deployed predictive models, an engineer hoping to foresee the consequences of model updates, or a researcher studying performative phenomena. To date, such data is scarcely available in benchmark datasets, hindering progress towards a better understanding of performative effects, which is essential for the reliable deployment of algorithmic systems in the social world. At the same time, we have shown that the deterministic nature of prediction poses unique challenges for causal identifiability even if Ŷ is observed. Thus, the success of observational designs (as shown in our empirical investigations) is closely tied to the corresponding identifiability conditions being satisfied. Our results must not be understood as a green light to justify the use of supervised learning techniques to address performativity in full generality beyond the scope of our theoretical results.

Limitations and Extensions. The central assumption of our work is the causal model in Figure 1. While it carves out a rich and interesting class of performative prediction problems that allows us to articulate the challenges of covariates and predictions being coupled, it cannot account for all mechanisms of performativity. This in turn gives rise to interesting questions for follow-up studies.

A first neglected aspect is performativity through social influence. Our causal model relies on the stable unit treatment value assumption (SUTVA) [23]: there is no possibility for the prediction of one individual to impact the outcome of his or her peers. Such an individualistic perspective is not unique to our paper but prevalent in existing causal analyses and model-based approaches to performative prediction and strategic classification [e.g., 20, 25, 43, 3, 18, 22]. Spillover effects [cf. 60, 64, 1, 40] are yet unexplored in the context of performative prediction. Nevertheless, they have important implications for how causal effects should be estimated and interpreted. In the context of our work, they imply that an intervention on f can no longer be explained solely by changing an individual's prediction. As a result, approaches for microfounding performative effects based on models learned from simple, unilateral interventions on an individual's prediction yield different causal estimates than the supervised-learning-based identification methods studied in this work. A preliminary study included in Appendix C shows that data-driven techniques can pick up on interference patterns in the data and benefit from structural properties such as network homophily [19], whereas individualistic modeling misses out on the indirect component arising from neighbors influencing each other.

A second aspect is performativity in non-causal prediction. Our model posits that prediction is based solely on features X that are causal for the outcome Y. This is a desirable situation in many practical applications, because causal predictions disincentivize gaming by strategic individuals manipulating their features [43, 3] and offer explanations for the outcome that persist across environments [54, 7]. Nevertheless, non-causal variables are often included as input features in practical machine learning prediction tasks. Establishing a better understanding of the implications of the resulting causal dependencies due to performativity could be an important direction for future work.
Finally, performative effects can also lead to covariate shift and impact the joint distribution P(X, Y) = P(Y|X)P(X) over covariates and labels. We assumed that performative effects only surface in P(Y|X). For our theoretical results, this implied that overlap in the X variable across environments is trivially satisfied, which enabled us to pinpoint the challenges of learning performative effects due to the coupling between X and Ŷ. Establishing identification in the presence of a causal arrow fθ → X requires additional steps.

Acknowledgements

The authors would like to thank Moritz Hardt and Lydia Liu for many helpful discussions throughout the development of this project; Tijana Zrnic, Krikamol Muandet, Jacob Steinhardt, Meena Jagadeesan, and Juan Perdomo for feedback on the manuscript; and Gary Cheng for helpful discussions on differential privacy. We are also grateful for the constructive discourse and valuable feedback provided by the reviewers, which greatly helped improve the manuscript.

6 Paper checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix F.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper regarding predictive models and decision-making? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and relevance to real-world scenarios? 3. What are the weaknesses of the paper, especially regarding its assumptions and potential limitations in more complex situations? 4. Can the author provide additional insights into the causal structure and separability assumptions made in the paper? 5. How might the approach be extended or modified to handle more intricate causal relationships or multiple variables?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper is about making predictions from predictions: given a deployed model to which decision subjects may best respond, how will their target variable Y change? The goal of the authors is to predict the impact of a new model deployment before actually deploying it. Instead of trying to come up with an optimal solution (e.g., performative optimality), the authors are interested in understanding the underlying causal mechanism of the distribution shifts. Strengths And Weaknesses I find the problem of identifying E[Y | do(Ŷ = y), X] in the performative prediction setting interesting. Overall the paper makes a novel contribution by providing sufficient conditions to identify the causal effect of predictions under some assumptions. The relation to the literature on spillover effects/social network analysis is also interesting. To me, the major weakness is that the paper seems to make heavy assumptions in order to have interesting and clean results. For example, the whole paper is built on assuming a particular causal structure (Figure 1), the definition of the extrapolation error allows a clean separation between the influence of X and Ŷ, and in Section 3.3 the author assumes the effect of X and Ŷ on Y is separable as well. Thus it wasn't clear to me how readily this work can be extended to more complicated settings. The authors provide some empirical justification, but I would like to see some theoretical insights on how the results would change or fail to hold without certain assumptions; I believe this would greatly strengthen the paper. Questions Can the author provide some insights on why overlap is easier to achieve if |T| is small? In general, can we boost the overlap by including more classifiers? If the underlying causal model is more complicated (e.g., more variables X1, X2, ..., or more complicated causal relationships among them), what would be a good way to ensure the identifiability of the causal structure? Limitations Please see the Strengths And Weaknesses section.
NIPS
The second condition exploits the property that predictive models are often over-parameterized, which leads to incongruence in functional complexity between different causal paths, enabling the effects of predictions to be separated from other variables’ effects. The third condition takes advantage of discreteness in predictions such that performative effects can be disentangled from the continuous relationship between covariates and outcomes. Together, these results reveal that particularities of the performative prediction problem can enable us to recover the causal effect of predictions from observational data. In particular, we show that, under these conditions, standard supervised learning techniques can be used to find these transferable functional relationships by treating predictions as model inputs. Empirically, we demonstrate that supervised learning succeeds in findingMY even in finite samples. We conclude with a discussion of limitations and extensions of our work, pointing out potential violations of the modeling assumptions underlying our causal analysis and proposing directions for future work. 1.2 Broader context and related work The work by Perdomo et al. [51], initiated the discourse of performativity in the context of supervised learning by pointing out that the deployment of a predictive model can impact the data distribution we train our models on. Existing scholarship on performative prediction [c.f., 51, 42, 12, 44, 24, 26, 68, 45, 52, 31] has predominantly focused on achieving a particular solution concept with a prediction function that maps X to Y in the presence of unknown performative effects. We are interested in understanding the underlying causal mechanism of the performative distribution shift. Our work is motivated by the seemingly natural approach of lifting the supervised-learning problem and incorporating the prediction as an input feature when building a meta machine learning model for explaining Y . By establishing a connection to causal identifiability, our goal is to understand when such a data-driven strategy can help anticipate the down stream effects of predictions This work focuses on the setting where predictions lead to changes in the relationship between covariates X and label Y , while the marginal distribution P (X) over covariates is assumed to be fixed. This setting where performativity only surfaces in the label describes an interesting subclass of problems falling under the umbrella of performative (aka. model-induced or decision-dependent) distribution shifts [51, 37, 12]. Our assumptions are complementary to the strategic classification framework [8, 20] that focuses on a setting where performative effects concern P (X), while P (Y |X) is assumed to remain stable. Consequently, causal questions in strategic classification [e.g., 22, 3, 59] are concerned with identifying stable causal relationships between X and Y . Since we assume P (Y |X) can change (i.e. the true underlying ’concept’ determining outcomes can change), conceptually different questions emerge in our work. Similar in spirit to strategic classification, the work on algorithmic recourse and counterfactual explanations [32, 28, 65] focuses on the causal link between features and predictions, whereas we focus on the down-stream effects of predictions. There are interesting parallels between our work and related work on the offline evaluation of online policies [e.g., 35, 63, 36, 58]. 
In particular, [63] explicitly emphasize the importance of logging propensities of the deployed policy during data collection to be able to mitigate selection bias. In our work the deployed model can induce a concept shift. Thus, we find that additional information about the predictions of the deployed model needs to be recorded to be able to foresee the impact of a new predictive model on the conditional distribution P (Y |X), beyond enabling propensity weighting [55]. A notable work by [66] investigates how predictions at one time step impact predictions in future time steps. Complementary to these existing works we show that randomness in the predictive model is not the only way causal effects of predictions can be identified. For our theoretical results, we build on classical tools from causal inference [48, 57, 64]. In particular, we distill unique properties of the performative prediction problem to design assumptions for the identifiability of the causal effect of predictions. 2 The causal force of prediction Predictions can be performative and impact the population of individuals they aim to predict. Formulized it in the language of causal inference [48]: the deployment of a predictive model represents an intervention on a causal diagram that describes the underlying data generation process of the population. We will expand on this causal perspective to study an instance of ths performative prediction problem described below. 2.1 Prediction as a partial mediator Consider a machine learning application relying on a predictive model f that maps features X to a predicted label Ŷ . We assume the predictive model f is performative in that the prediction Ŷ = f(X) has a direct causal effect on the outcome variable Y of the individual it concerns. Thereby the prediction impacts how the outcome variable Y is generated from the features X . The causal diagram illustrating this setting is visualized in Figure 1. The features X ∈ X ⊆ Rd are drawn i.i.d. from a fixed underlying continuous distribution over covariates DX with support X . The outcome Y ∈ Y ⊆ R is a function of X , partially mediated by the prediction Ŷ ∈ Y . The prediction Ŷ is determined by the deployed predictive model f : X → Y . For a given prediction function f , every individual is assumed to be sampled i.i.d. from the data generation process described by the causal graph in Figure 1. We assume the exogenous noise ξY is zero mean, and ξf allows the prediction function to be randomized. Note that our model is not meant to describe performativity in its full generality (which includes other ways f may affect P (X,Y )). Rather, it describes an important and practically relevant class of performative feedback problems that are characterized by two properties: 1) performativity surfaces only in the label Y , and 2) performative effects are mediated by the prediction, such that Y ⊥ f | Ŷ , rather than dependent on the specifics of the decision rule. Application examples. Causal effects of predictions on outcomes have been documented in multiple contexts: A bank’s prediction about the client (e.g., his or her creditworthiness in applying for a loan) determines the interest rate assigned to them, which in turn changes a client’s financial situation [41]. Mathematical models that predict stock prices inform the actions of traders and thus heavily shape financial markets and economic realities [38]. Zillow’s housing price predictions directly impact sales prices [39]. 
Predictions about the severity of an illness play an important role in treatment decisions and hence the very chance of survival of the patient [34]. Another prominent example from psychology is the Pygmalion effect [56]. It refers to the phenomenon that high expectations lead to improved performance, which is widely documented in the context of education [6], sports [61], and organizations [16]. Examples of such performativity abound, and we hope to have convinced the reader that the performative effects in the label are important for algorithmic prediction. 2.2 Implications for performativity-agnostic learning Begin with considering the classical supervised learning task where Ŷ is unobserved. The goal is to learn a model h : X → Y for predicting the label Y from the features X . To understand the inherent challenge of classical prediction under performativity, we investigate the relationship between X and Y more closely. Specifically, the data generation process (Figure 1) implies that P (Y |X) = ∫ P (Y |Ŷ , X)P (Ŷ |X)dŶ . (4) This expression makes explicit how the relationship between X and Y that we aim to learn depends on the predictive model governing P (Ŷ |X). As a consequence, when the deployed predictive model at test time differs from the model at training time, performative effects surface as concept shift [17]. Such distribution shift problems are known to be intractable without structural knowledge about the shift, implying that we can not expect h to generalize to distributions induced by future model deployments. Let us inspect the resulting extrapolation gap in more detail and put existing positive results on performative prediction into perspective. Extrapolation loss. We illustrate the effect of performativity on predictive performance using a simple instantiation of the structural causal model from Figure 1. Therefore, assume a linear performative effect of strength α > 0 and a base function g1 : X → Y g(X, Ŷ ) := g1(X) + αŶ . (5) Now, assume we collect training data under the deployment of a predictive model fθ and validate our model under the deployment of fφ. We adopt the notion of a distribution map from Perdomo et al. [51] and write DXY (f) for the joint distribution over (X,Y ) surfacing from the deployment of a model f . We assess the quality of our predictive model h : X → Y over a distribution DXY (f) induced by f via the loss function ` : Y × Y → R and write Rf (h) := Ex,y∼DXY (f)`(h(x), y) for the risk of h on the distribution induced by f . We use h∗f for the risk minimizer h ∗ f := argminh∈HRf (h), and H for the hypothesis class we optimize over. Proposition 1 bounds the extrapolation loss and can be viewed as a concrete instantiation of the more general extrapolation bounds for performative prediction discussed in [37] within the feedback model from Figure 1. Proposition 1 (Hardness of performativity-agnostic prediction). Consider the data generation process in Figure 1 with g given in (5) and fθ, fφ being deterministic functions. Take a loss function ` : Y × Y → R that is γ-smooth and µ-strongly convex in its second argument. Let h∗fθ be the risk minimizer over the training distribution and assume the problem is realizable, i.e., h∗fθ ∈ H. Then, we can bound the extrapolation loss of h∗fθ on the distribution induced by fφ as γ 2 α2d2DX (fθ, fφ) ≥ ∆Rfθ→fφ(h ∗ fθ ) ≥ µ 2 α2d2DX (fθ, fφ) (6) where d2DX (fθ, fφ) := Ex∼DX (fθ(x)− fφ(x)) 2 and ∆Rfθ→fφ(h) := Rfφ(h)− Rfθ (h). 
The extrapolation loss ∆Rfθ→fφ(h ∗ fθ ) is zero if and only if either the strength of performativity tends to zero (α → 0), or the predictions of the two predictors fθ and fφ are identical over the support of DX . If this is not the case, an extrapolation gap is inevitable. This elucidates the fundamental hardness of performative prediction from feature, label pairs (X,Y ) when performative effects disrupt the causal relationship between X and Y . The special case where α = 0 aligns with the assumption of classical supervised learning, in which there is no performativity. This may hold in practice if the predictive model is solely used for descriptive purposes, or if the agent making the prediction does not enjoy any economic power [21]. The second special case where the extrapolation error is small is when d2DX (fθ, fφ)→ 0. In which case DXY (fθ) and DXY (fφ) are equal in distribution and hence exhibit the same risk minimizer. Such a scenario can happen, for example, if the model fφ is obtained by retraining fθ on observational data and a fixpoint is reached (fθ = h∗fθ ). The convergence of policy optimization strategies to such fixpoints (perfromative stablity) has been studied in prior work [e.g., 51, 42, 12] and enabled optimality results even in the presence of performative concept shifts, relying on the target model fφ not being chosen arbitrarily, but based on a pre-specified update strategy. 3 Identifying the causal effect of prediction Having illustrated the hardness of performativity-agnostic learning, we explore under what conditions incorporating the presence of performative predictions into the learning task enables us to anticipate the perfromative effects of Ŷ on Y . Towards this goal, we assume that the mediator Ŷ in Figure 1 is observed—the prediction takes on the role of the treatment in our causal analysis and we can not possibly hope to estimate the treatment effect of a treatment that is unobserved. 3.1 Problem setup Assume we are given access to data points (x, ŷ, y) generated i.i.d. from the structural causal model in Figure 1 under the deployment of a prediction function fθ. From this observational data, we wish to estimate the expected potential outcome of an individual under the deployment of an unseen (but known) predictive model fφ. We note that given our causal graph, the implication of intervening on the function f can equivalently be explained by an intervention on the prediction Ŷ . Thus, we are interested in identifying the causal mechanism: MY (x, ŷ) := E[Y |X = x,do(Ŷ = ŷ)]. (7) Unlike P (Y |X), the mecahnismMY is invariant to the changes in the predictive model governing P (Ŷ |X). Thus, being able to identifyMY will allow us to make inferences about the potential outcome surfacing from planned model updates beyond explaining patterns in historical data. We can evaluateMY to infer y for any x at ŷ = fφ(x) for fφ being the model of interest. For simplicity of notation, we will write D(fθ) to denote the joint distribution over (X, Ŷ , Y ) of the observed data collected under the deployment of the predictive model fθ. We sayMY can be identified, if it can uniquely be expressed as a function of observed data. More formally: Definition 1 (identifiability). Given a predictive model f , the causal graph in Figure 1, and a set of assumptions A. We sayMY is identifiable from D(f), if for any function h that complies with assumptions A and h(x, ŷ) = MY (x, ŷ) for pairs (x, ŷ) ∈ supp(DXY (f)) it must also hold that h(x, ŷ) =MY (x, ŷ) for all pairs (x, ŷ) ∈ X × Y . 
Without causal identifiability, there might be models h′ 6=MY that explain the training distribution equally well but do not transfer to the distribution induced by the deployment of a new model. Causal identifiability is crucial for enabling extrapolation. It quantifies the limits of what we can infer given access to the training data distribution, ignoring finite sample considerations. Identification with supervised learning. Identifiability ofMY from samples of D(fθ) implies that the historical data collected under the deployment of fθ contains sufficient information to recover the invariant relationship (7). As a concrete identification strategy, consider the following standard variant of supervised learning that takes in samples (x, ŷ, y) and builds a meta-model that predicts Y from X, Ŷ by solving the following risk minimization problem hSL := argmin h∈H E(x,ŷ,y)∼D(fθ) [ (h(x, ŷ)− y)2 ] . (8) whereH denotes the hypothesis class. We consider the squared loss for risk minimization because it pairs well with the exogeneous noise ξY in (3) being additive and zero mean. The strategy (8) is an instance of what we term predicting from predictions. Lemma 2 provides a sufficient condition for the supervised learning solution hSL to recover the invariant causal quantityMY . Lemma 2 (Identification strategy). Consider the data generation process in Figure 1 and a set of assumptions A. Given a hypothesis classH such that every h ∈ H complies with A and the problem is realizable, i.e., MY ∈ H. Then, if MY is causally identifiable from D(fθ) given A, the risk minimizer hSL in (8) will coincide withMY . 3.2 Challenges for identifiability The main challenge for identification ofMY from data is that in general, the prediction rule fθ which produces Ŷ is a deterministic function of the covariates X . This means that, for any realization of X , we only get access to one Ŷ = fθ(X) in the training distribution, which makes it challenging to disentangle the direct and the indirect effects of X on Y . To illustrate this challenge, consider the function h(x, ŷ) :=MY (x, fθ(x)) that ignores the input parameter ŷ and only relies on x for explaining the outcome. This function explains y equally well and can not be differentiated from MY based on data collected under the deployment of a deterministic prediction rule fθ. The problem is akin to fitting a linear regression model to two perfectly correlated covariates. More broadly, this ambiguity is due to what is known as a lack of overlap (or lack of positivity) in the literature of causal inference [47, 23]. In the covariate shift literature, the lack of overlap surfaces when the covariate distribution violates the common support assumption and the propensity scores are not well-defined (see e.g., Pan and Yang [46]). This problem renders causal identification and thus data-driven learning of performative effects from deterministic predictions fundamentally challenging. Proposition 3 (Nonidentifiability from deterministic predictions). Consider the structural causal model in Figure 1. Assume Y non-trivially depends on Ŷ , and the set Y is not a singleton. Then, given a deterministic prediction function f , the mechanismMY is not identifiable from D(f). The identifiability issue persists as long as the two variables X , Ŷ are deterministically bound and there is no incongruence or hidden structure that can be exploited to disentangle the direct effect of X on Y from the indirect effect mediated by Ŷ . 
In the following, we focus on particularities of prediction problems and show how they allow us to identify M_Y.

3.3 Identifiability from randomization

We start with the most natural setting that provides identifiability guarantees: randomness in the prediction function fθ. Using standard arguments about overlap [47], we can identify M_Y(x, ŷ) for any pair x, ŷ with positive probability in the data distribution D(fθ) from which the training data is sampled. To relate this to our goal of identifying the outcome under the deployment of an unseen model fφ, we introduce the following definition:

Definition 2 (output overlap). Given two predictive models fθ, fφ, the model fφ is said to satisfy output overlap with fθ if for all x ∈ X and any subset Y′ ⊆ Y with positive measure, it holds that

P[fφ(x) ∈ Y′] / P[fθ(x) ∈ Y′] > 0.    (9)

In particular, output overlap requires the support of the new model's predictions fφ(x) to be contained in the support of fθ(x) for every potential x ∈ X. The following proposition takes advantage of the fact that the joint distribution over (X, Y) is fully determined by the deployed model's predictions to relate output overlap to identification:

Proposition 4. Given the causal graph in Figure 1, the mechanism M_Y(x, ŷ) is identifiable from D(fθ) for any pair x, ŷ with ŷ = fφ(x), as long as fφ is a prediction function that satisfies output overlap with fθ.

Proposition 4 allows us to pinpoint the models fφ to which we can extrapolate from data collected under fθ. Furthermore, it makes explicit that, for collecting data to learn about performative effects, it is ideal to deploy a predictor fθ that is randomized so that the prediction output has full support over Y for any x. Such a model would generate a dataset that guarantees global identification of M_Y over X × Y, and thus robust conclusions about any future deployable model fφ. One interesting and relevant setting that satisfies this property is the differentially private release of predictions through an additive Laplace (or Gaussian) noise mechanism applied to the output of the prediction function [13].¹

¹ In Appendix B we discuss two additional natural sources of randomness (randomized decisions and noisy measurements of covariates) that can potentially help identification with appropriate side-information.

While standard in the literature, a caveat of identification from randomization is that there are several reasons a decision-maker may choose not to deploy a randomized prediction function in performative environments, including negative externalities and concerns about user welfare [29], but also business interests in preserving the consumer value of the prediction-based service offered. In the context of our credit scoring example, random predictions would imply that interest rates are randomly assigned to applicants in order to learn how the rates impact their probability of paying back. Given regulatory requirements for lending institutions, we cannot expect to observe this scenario in practice.

3.4 Identifiability through overparameterization

The following two sections consider situations where we can achieve identification, without randomization, from data collected under a deterministic fθ. Our first result exploits incongruences in functional complexity arising from machine learning models that are overparameterized [e.g., 30]. By overparameterization, we refer to the fact that the representational complexity of the model is larger than the underlying concept it needs to describe.

Assumption 1 (overparameterization). We say a function f is overparameterized with respect to G over X if there is no function g′ ∈ G and c ∈ R such that f(x) = c · g′(x) for all x ∈ X.
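For intuition about why overparameterization helps, consider the following numerical sketch: when fθ is quadratic but the direct effect and the hypothesis class are linear, least squares on (x, ŷ) pins down the structural coefficients uniquely even though fθ is deterministic. All constants are illustrative; Example 3.1 below works through the same setting analytically.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 2.0, -1.0      # unknown structural coefficients to be recovered
gamma, xi = 1.0, 0.5         # f_theta(x) = gamma * x**2 + xi * x (illustrative)
n = 50_000

x = rng.normal(size=n)
yhat = gamma * x**2 + xi * x                      # deterministic, but quadratic
y = alpha * x + beta * yhat + rng.normal(size=n)

A = np.column_stack([x, yhat])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                  # approximately (2.0, -1.0): identified since gamma > 0

# With gamma = 0, f_theta collapses into the linear class, the columns of A
# become proportional, and the fit is ambiguous again.
```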
A challenge for identification is that, for deterministic fθ, the prediction can be reconstructed from X without relying on Ŷ, and thus h(x, ŷ) = M_Y(x, fθ(x)) cannot be differentiated from M_Y based on observational data. However, note that this ambiguity relies on there being an admissible h such that h(·, ŷ) for a fixed ŷ can represent fθ. If fθ is overparameterized with respect to the hypothesis class H, this ambiguity is resolved. Let us make this intuition concrete with an example:

Example 3.1. Assume the structural equation for y in Figure 1 is g(x, ŷ) = αx + βŷ for some unknown α, β. Consider prediction functions fθ of the form fθ(x) = γx² + ξx for some γ, ξ ≥ 0, and let H be the class of linear functions. Then, any admissible estimate h ∈ H takes the form h(x, ŷ) = α′x + β′ŷ. For h to be consistent with observations we need α′ + β′ξ = α + βξ and β′γ = βγ. This system of equations has a unique solution as long as γ > 0, which corresponds to the case where fθ is overparameterized with respect to H. In contrast, for γ = 0 the function h(x, ŷ) = (α + βξ)x would explain the training data equally well.

The following result generalizes this argument to separable functions.

Proposition 5. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that g can be decomposed as g(X, Ŷ) = g1(X) + αŶ for some α > 0 and g1 ∈ G, where the function class G is closed under linear combinations (i.e., g1, g2 ∈ G ⇒ a1·g1 + a2·g2 ∈ G for all a1, a2 ∈ R). Let H contain functions that are separable in X and Ŷ, linear in Ŷ, and such that for every h ∈ H and fixed ŷ it holds that h(·, ŷ) ∈ G. Then, if fθ is overparameterized with respect to G over the support of D_X, M_Y is identifiable from D(fθ).

3.5 Identifiability from classification

A second ubiquitous source of incongruence that we can exploit for identification is the discrete nature of predictions in the context of classification. The resulting discontinuity in the relationship between X and Ŷ enables us to disentangle M_Y from the direct effect of X on Y. This identification strategy is akin to the popular regression discontinuity design [33] and relies on the assumption that all other variables in X are continuously related to Y around the discontinuities in Ŷ.

Proposition 6. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that the structural equation for Y is separable, g(X, Ŷ) = g1(X) + g2(Ŷ) for all X, Ŷ, for some differentiable functions g1 and g2. Further, suppose X is a continuous random variable and Ŷ is a discrete random variable that takes on at least two distinct values with non-zero probability. Then, M_Y is identifiable from D(fθ).

Similar to Proposition 5, the separability assumption together with incongruence provides a way to disentangle the direct effect from the indirect effect of X on Y. Separability is necessary in order to achieve global identification guarantees without randomness; the identification of entangled components without overlap is fundamentally hard. Thus, under violations of the separability assumption, we can only expect the separable components of g to be correctly identified. Similarly, a regression discontinuity design only enables the identification of the causal effect locally around the discontinuity. Extrapolation away from the decision boundary to models fφ that are substantially different from fθ increasingly relies on separability holding true.
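To illustrate the discreteness route, here is a minimal regression-discontinuity-style sketch: a threshold classifier induces a jump in Ŷ at x = 0, while the direct effect g1 is continuous there, so comparing outcomes just above and below the boundary recovers the effect of the prediction. The classifier, the choice g1(x) = sin(x), and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

x = rng.normal(size=n)
yhat = (x > 0).astype(float)        # deterministic classifier: discrete Yhat
effect = 2.0                        # g2(yhat) = effect * yhat (illustrative)
y = np.sin(x) + effect * yhat + 0.1 * rng.normal(size=n)   # g1(x) = sin(x)

# Compare average outcomes in a narrow window around the decision boundary.
# Since g1 is continuous at 0, the jump estimates the prediction effect.
h = 0.05
jump = y[(0 < x) & (x < h)].mean() - y[(-h < x) & (x < 0)].mean()
print(jump)                         # ~ 2.0, up to O(h) bias and sampling noise
```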
4 Empirical evaluation

We investigate empirically how well the supervised learning solution h_SL in (8) is able to identify the causal mechanism M_Y from observational data in practical settings with finite data.

Methodology. We generated semi-synthetic data for our experiments, using a Census income prediction dataset from folktables.org [11]. Using this dataset as a starting point, we simulate a training dataset and a test dataset with distribution shift as follows: First, we choose two different predictors fθ and fφ to predict a target variable of interest (e.g., income) from covariates X (e.g., age, occupation, education, etc.). If not specified otherwise, fθ is fit to the original dataset to minimize squared error, while fφ is trained on randomly shuffled labels. Next, we posit a function g for simulating the performative effects. Then, we generate a training dataset of (X, Ŷ, Y) tuples from the causal model in Figure 1, using the covariates X from the original data, g, and fθ to generate Ŷ and Y. Similarly, we generate a test dataset of (X, Ŷ, Y) tuples using X, g, and fφ. We assess how well supervised methods learn transferable functional relationships by fitting a model h_SL to the training dataset and then evaluating the root mean squared error (RMSE) for regression and the accuracy for classification on the test dataset. In our figures, we visualize the standard error from 10 replicates with different random seeds, and we compare against an in-distribution baseline trained and evaluated on samples of D(fφ). If not specified otherwise, we use N = 200,000 samples.
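The pipeline above can be sketched end-to-end as follows, with synthetic covariates standing in for the folktables data and sklearn's linear regression standing in for the meta-model; everything here is a simplified, illustrative stand-in for the paper's setup rather than the actual experiment code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
alpha, d, N = 0.5, 5, 200_000

X = rng.normal(size=(N, d))                   # stand-in for folktables covariates
beta = rng.normal(size=d)
g1 = lambda X: X @ beta                       # posited direct effect

def make_dataset(f):
    """Generate (X, Yhat, Y) under deployment of prediction function f."""
    yhat = f(X)
    y = g1(X) + alpha * yhat + rng.normal(size=N)
    return np.column_stack([X, yhat]), y

f_theta = lambda X: X @ beta + 0.1 * rng.normal(size=len(X))  # randomized deployment
f_phi = lambda X: X @ rng.permutation(beta)                   # shifted target model

A_train, y_train = make_dataset(f_theta)
A_test, y_test = make_dataset(f_phi)

h_SL = LinearRegression().fit(A_train, y_train)               # predict from predictions
rmse = np.sqrt(np.mean((h_SL.predict(A_test) - y_test) ** 2))
print(rmse)   # close to the noise floor of 1.0 when M_Y is identifiable
```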
4.1 Necessity of identification guarantees for supervised learning

We start by illustrating why our identification guarantees are crucial for supervised learning under performativity. To this end, we instantiate the structural equation g in Figure 1 as

g(X, Ŷ) = g1(X) + αŶ    (10)

with g1(X) = β⊤X and ξ_Y ∼ N(0, 1). The coefficients β are determined by linear regression on the original dataset. The hyperparameter α quantifies the performativity strength, which we vary in our experiments. The predictions Ŷ are generated from a linear model fθ that we modify to illustrate the resulting impact on identifiability. We optimize h_SL in (8) over H being the class of linear functions.

We start by illustrating a failure mode of supervised learning in a non-identifiable setting (Proposition 3). To this end, we let fθ be a deterministic linear model fit to the base dataset (fθ(X) ≈ β⊤X). This results in M_Y not being identifiable from D(fθ). In Figure 2(a) we can see that supervised learning indeed struggles to identify a transferable functional relationship from the training data. The meta-model returns h_SL(X, Ŷ) = (1 + α)Ŷ, instead of identifying g, which leads to a high extrapolation error independent of the strength of performativity. While we only show the error for one fφ in Figure 2(a), the error grows with the distance d²_{D_X}(fθ, fφ). In contrast, when the feature Ŷ is not included, the supervised learning strategy returns h_SL(X) = (1 + α)β⊤X. The extrapolation loss of this performativity-agnostic model scales with the strength of performativity (Proposition 1) and is thus strictly smaller than the error of the model that predicts from predictions.

Next, we move to the regime of our identification results (Propositions 4–6). To this end, we modify the way the predictions in the training data are generated. In Figure 2(b) we use additive Gaussian noise to determine the predictions as Ŷ = fθ(X) + η with η ∼ N(0, σ²). In Figure 2(c) we augment the input to fθ with second-degree polynomial features to achieve overparameterization. In Figure 2(d) we round the predictions of fθ to obtain discrete values. In all three cases, including Ŷ as a feature is beneficial and allows the model to match the in-distribution accuracy baselines, closing the extrapolation gap that is inevitable for performativity-agnostic prediction.

4.2 Strength of incongruence and finite samples

We next conduct an ablation study and investigate how the degree of overparameterization and the noise level of a randomized fθ impact the extrapolation performance of supervised learning. To this end, we consider the setup in (10) with a general function g1. We fix the level of performativity at α = 0.5 for this experiment. We optimize h_SL in (8) over H (which we vary).

In Figure 3(a) we investigate the effect of overparameterization of fθ on the extrapolation error of h_SL. We choose fully connected neural networks with a single hidden layer to represent the functions g1, fθ, and h_SL. For g1 and H we take a neural network with m = 3 units in the hidden layer. The model g1 is fit to the original dataset. We vary the number of units in the hidden layer of fθ, denoted mθ. As expected, the extrapolation error decreases with the complexity of fθ. As soon as mθ > m there is a significant benefit to including predictions as features. In this regime, M_Y becomes identifiable, as Proposition 5 suggests. In turn, without access to Ŷ, the model suffers an inevitable extrapolation gap due to a concept shift that is independent of the properties of fθ.

In Figure 3(b) we investigate the effect of the magnitude of the additive noise added to the predictions. Here H and g1 are linear functions. We have Ŷ = fθ(X) + βη with η ∼ N(0, 1), and we vary the noise level β. We see that even small amounts of noise are sufficient for identification, and adding Ŷ as a feature to our meta machine learning model is effective as soon as the noise in fθ is non-zero. In Figure 3(c) we fix the noise level at β = 0.5 and vary the number of samples N. We find that only moderate dataset sizes are necessary for predicting from predictions to approximate M_Y in our identifiable settings.
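A stripped-down version of the noise ablation can be reproduced as follows (one-dimensional, with illustrative constants; fφ is taken to be a sign-flipped model to force a large shift):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n = 0.5, 200_000

def extrapolation_rmse(noise):
    x = rng.normal(size=n)
    yhat = x + noise * rng.normal(size=n)            # randomized f_theta(x) = x
    y = x + alpha * yhat + rng.normal(size=n)
    coef, *_ = np.linalg.lstsq(np.column_stack([x, yhat]), y, rcond=None)
    x_t = rng.normal(size=n)                         # test under f_phi(x) = -x
    yhat_t = -x_t
    y_t = x_t + alpha * yhat_t + rng.normal(size=n)
    pred = np.column_stack([x_t, yhat_t]) @ coef
    return np.sqrt(np.mean((pred - y_t) ** 2))

for noise in [0.0, 0.05, 0.2, 1.0]:
    print(noise, round(extrapolation_rmse(noise), 3))
# Already small noise levels bring the test RMSE down to the noise floor
# of ~1.0; with noise = 0.0 the coefficients are ambiguous and extrapolation degrades.
```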
5 Discussion

This paper focused on identifying the causal effect of predictions on outcomes from observational data. We point out several natural situations where this causal question can be answered, but we also highlight situations where observational data is not sufficiently informative to reason about performative effects. By establishing a connection between causal identifiability and the feasibility of anticipating performative effects using data-driven techniques, this paper contributes to a better understanding of the suitability of supervised learning techniques for explaining social effects arising from the deployment of predictive models in economically and socially relevant applications.

We hope the positive results in this work serve as a message for data collection: only if predictions are observed can they be incorporated to anticipate the performative effects of future model deployments. Thus, access to this information is crucial for an analyst hoping to understand the effects of deployed predictive models, an engineer hoping to foresee the consequences of model updates, or a researcher studying performative phenomena. To date, such data is scarcely available in benchmark datasets, hindering progress towards a better understanding of performative effects, which is essential for the reliable deployment of algorithmic systems in the social world. At the same time, we have shown that the deterministic nature of prediction poses unique challenges for causal identifiability even if Ŷ is observed. Thus, the success of observational designs (as shown in our empirical investigations) is closely tied to the corresponding identifiability conditions being satisfied. Our results must not be understood as a green light to justify the use of supervised learning techniques to address performativity in full generality beyond the scope of our theoretical results.

Limitations and extensions. The central assumption of our work is the causal model in Figure 1. While it carves out a rich and interesting class of performative prediction problems that allows us to articulate the challenges of covariates and predictions being coupled, it cannot account for all mechanisms of performativity. This in turn gives rise to interesting questions for follow-up studies.

A first neglected aspect is performativity through social influence. Our causal model relies on the stable unit treatment value assumption (SUTVA) [23]: there is no possibility for the prediction of one individual to impact the outcome of their peers. Such an individualistic perspective is not unique to our paper but prevalent in existing causal analyses and model-based approaches to performative prediction and strategic classification [e.g., 20, 25, 43, 3, 18, 22]. Spillover effects [cf. 60, 64, 1, 40] are yet unexplored in the context of performative prediction. Nevertheless, they have important implications for how causal effects should be estimated and interpreted. In the context of our work they imply that an intervention on f can no longer be explained solely by changing an individual's prediction. As a result, approaches for microfounding performative effects based on models learned from simple, unilateral interventions on an individual's prediction result in different causal estimates than the supervised-learning-based methods for identification studied in this work. A preliminary study included in Appendix C shows that data-driven techniques can pick up on interference patterns in the data and benefit from structural properties such as network homophily [19], whereas individualistic modeling misses out on the indirect component arising from neighbors influencing each other.

A second aspect is performativity in non-causal prediction. Our model posits that prediction is solely based on features X that are causal for the outcome Y. This is a desirable situation in many practical applications because causal predictions disincentivize gaming by strategic individuals manipulating their features [43, 3] and offer explanations for the outcome that persist across environments [54, 7]. Nevertheless, non-causal variables are often included as input features in practical machine learning prediction tasks. Establishing a better understanding of the implications of the resulting causal dependencies due to performativity could be an important direction for future work.
Finally, performative effects can also lead to covariate shift and impact the joint distribution P(X, Y) = P(Y | X)P(X) over covariates and labels. We assumed that performative effects only surface in P(Y | X). For our theoretical results, this implied that overlap in the X variable across environments is trivially satisfied, which enabled us to pinpoint the challenges of learning performative effects due to the coupling between X and Ŷ. Establishing identification in the presence of a causal arrow fθ → X requires additional steps.

Acknowledgements

The authors would like to thank Moritz Hardt and Lydia Liu for many helpful discussions throughout the development of this project; Tijana Zrnic, Krikamol Muandet, Jacob Steinhardt, Meena Jagadeesan, and Juan Perdomo for feedback on the manuscript; and Gary Cheng for helpful discussions on differential privacy. We are also grateful for the constructive discourse and valuable feedback provided by the reviewers, which greatly helped improve the manuscript.

6 Paper checklist

1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
   (b) Did you describe the limitations of your work? [Yes]
   (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix F.
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   (a) Did you state the full set of assumptions of all theoretical results? [Yes]
   (b) Did you include complete proofs of all theoretical results? [Yes]
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [Yes]
   (b) Did you mention the license of the assets? [Yes]
   (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes]
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What are the strengths and weaknesses of the paper regarding its presentation, mathematical notation, and assumptions?
2. How does the paper contribute to the problem of "predicting from predictions," and what are the limitations of its approach?
3. Can the authors provide more context or references to existing literature on similar topics, such as over-parameterization, OoD generalization, and learning under distribution shift?
4. How do the different propositions in the theoretical analysis relate to each other, and how do they combine into an understanding of the "bigger picture"?
5. What is the connection between the paper's findings and real-world applications, such as logging the state of deployed predictors for supervised learning?
6. How might the paper's results extend to the finite-sample setting, and what changes could be expected when the dataset size is finite?
7. Are there any minor typos or errors in the paper that should be addressed?
Summary Of The Paper

This work investigates performative prediction through the lens of causal inference, aiming to identify causal structures which allow for counterfactual estimation of performative effects. The investigation in this paper assumes a specific causal structure, in which the predicted value Ŷ acts as a mediator between the chosen prediction model fθ and the outcome Y, and fθ does not directly affect other variables (Figure 1). The theoretical analysis assumes an infinite-sample setting.

Three sets of theoretical results are presented:

- The first set of results assumes a specific performative structure and quantifies the regret from naively training a model under performative behavior (Propositions 1, 2). The result aims to illustrate the negative effects of neglecting a performative causal structure.
- For the second set of theoretical results, the authors identify "overlap/positivity" as the main limiting factor in performative extrapolation. The authors point out three avenues through which this concern can be alleviated: randomization of model outputs (Proposition 3), noisy measurement of covariates (Proposition 4), and incongruence between the effect of X on Y and the effect of X on Ŷ in separable performative structures ("over-parameterization" in Proposition 5, discrete classification in Proposition 6).
- For the third set of results, the authors initiate an investigation of spillover effects between users, identifying a spillover structure G through which identifiability is possible.

In the experimental evaluation, the authors present results which validate their claims, showing empirically that results extend beyond the theoretical guarantees.

Strengths And Weaknesses

Strengths

- Presentation and mathematical notation are very clear.
- Causal assumptions are made explicit.
- Identifying "failure modes" and "success modes" of performative prediction is an approach which can establish strong theoretical foundations for this area of research, and help it further establish applicability in practice.
- The investigation points out avenues of research which may be interesting for further inquiry, such as the relation between model over-parameterization and performative prediction, or the significance of spillover effects.

Weaknesses

- The paper claims to address the general problem of "predicting from predictions", but in practice assumes a very specific causal structure and does not sufficiently establish its applicability. In particular, the paper assumes a causal model in which fθ does not affect X. However, a direct causal path between fθ and X exists in many practical cases, for example in the "Actionable Recourse" setting (Ustun et al. 2018). In Section 2.1, the authors mention many applications in which a performative structure may be present, but it's not clear whether they can indeed be approximated using the causal structure presented in Figure 1.
- The paper presents a collection of interesting insights, but it's not made clear how they combine into an understanding of the "bigger picture". Moreover, even though similar problems were investigated in previous literature (e.g., "Counterfactual Risk Minimization: Learning from Logged Bandit Feedback" by Swaminathan and Joachims 2015, and recent work on over-parameterization and OoD generalization in deep neural nets), these existing results are not discussed in the paper.
- Theoretical results assume that the sample size is infinite, and that predictors are minimizers of the expected risk (i.e., h* = argmin_{h∈H} E[w(h(x) − y)²]), a quantity which cannot be directly optimized in practice. It is not clear how the formalism extends to the finite-sample setting. What would change when the dataset size is finite?
- A few minor typos: line 263 (bet -> be), line 307 (inequality expression seems incomplete - missing a random variable next to the expectation operator?), line 374 (lesson -> lessons).

Questions

- When do we expect the performative causal structure assumption (Figure 1) to hold? When do we expect it not to hold? What are the implications of making a wrong assumption about the causal structure?
- What is the relation between the different propositions in the theoretical analysis? How do they combine into an understanding of a "bigger picture"? Are they "complete" in the sense that they cover all possible influence modes?
- The discussion claims that "one of the most important lessons from this work is that there is high value to logging the state of the deployed predictor when collecting data for the purpose of supervised learning". I agree with this claim very much, and am wondering if there is a way to support or quantify it.
- What is the connection to existing work on learning under distribution shift? Literature that comes to mind:
  - "Counterfactual Risk Minimization: Learning from Logged Bandit Feedback", Swaminathan and Joachims 2015
  - "Discriminative Learning Under Covariate Shift", Bickel et al. 2009
  - "Actionable Recourse in Linear Classification", Ustun et al. 2018
  - Literature on online learning and multi-armed bandits (e.g., can we think of an "ε-greedy" strategy as an inducer of identifiability?)

Limitations

The paper makes a very strong causal assumption, but the current version does not properly contextualize it within the existing literature. I feel that the paper would benefit from discussing its limitations and the implications of the causal structure assumption in more depth. Extension of the theoretical analysis to the finite-sample case, and the relation to existing machine learning literature on similar topics, would also strengthen the paper.
NIPS
The extrapolation loss ∆Rfθ→fφ(h ∗ fθ ) is zero if and only if either the strength of performativity tends to zero (α → 0), or the predictions of the two predictors fθ and fφ are identical over the support of DX . If this is not the case, an extrapolation gap is inevitable. This elucidates the fundamental hardness of performative prediction from feature, label pairs (X,Y ) when performative effects disrupt the causal relationship between X and Y . The special case where α = 0 aligns with the assumption of classical supervised learning, in which there is no performativity. This may hold in practice if the predictive model is solely used for descriptive purposes, or if the agent making the prediction does not enjoy any economic power [21]. The second special case where the extrapolation error is small is when d2DX (fθ, fφ)→ 0. In which case DXY (fθ) and DXY (fφ) are equal in distribution and hence exhibit the same risk minimizer. Such a scenario can happen, for example, if the model fφ is obtained by retraining fθ on observational data and a fixpoint is reached (fθ = h∗fθ ). The convergence of policy optimization strategies to such fixpoints (perfromative stablity) has been studied in prior work [e.g., 51, 42, 12] and enabled optimality results even in the presence of performative concept shifts, relying on the target model fφ not being chosen arbitrarily, but based on a pre-specified update strategy. 3 Identifying the causal effect of prediction Having illustrated the hardness of performativity-agnostic learning, we explore under what conditions incorporating the presence of performative predictions into the learning task enables us to anticipate the perfromative effects of Ŷ on Y . Towards this goal, we assume that the mediator Ŷ in Figure 1 is observed—the prediction takes on the role of the treatment in our causal analysis and we can not possibly hope to estimate the treatment effect of a treatment that is unobserved. 3.1 Problem setup Assume we are given access to data points (x, ŷ, y) generated i.i.d. from the structural causal model in Figure 1 under the deployment of a prediction function fθ. From this observational data, we wish to estimate the expected potential outcome of an individual under the deployment of an unseen (but known) predictive model fφ. We note that given our causal graph, the implication of intervening on the function f can equivalently be explained by an intervention on the prediction Ŷ . Thus, we are interested in identifying the causal mechanism: MY (x, ŷ) := E[Y |X = x,do(Ŷ = ŷ)]. (7) Unlike P (Y |X), the mecahnismMY is invariant to the changes in the predictive model governing P (Ŷ |X). Thus, being able to identifyMY will allow us to make inferences about the potential outcome surfacing from planned model updates beyond explaining patterns in historical data. We can evaluateMY to infer y for any x at ŷ = fφ(x) for fφ being the model of interest. For simplicity of notation, we will write D(fθ) to denote the joint distribution over (X, Ŷ , Y ) of the observed data collected under the deployment of the predictive model fθ. We sayMY can be identified, if it can uniquely be expressed as a function of observed data. More formally: Definition 1 (identifiability). Given a predictive model f , the causal graph in Figure 1, and a set of assumptions A. We sayMY is identifiable from D(f), if for any function h that complies with assumptions A and h(x, ŷ) = MY (x, ŷ) for pairs (x, ŷ) ∈ supp(DXY (f)) it must also hold that h(x, ŷ) =MY (x, ŷ) for all pairs (x, ŷ) ∈ X × Y . 
Without causal identifiability, there might be models h′ 6=MY that explain the training distribution equally well but do not transfer to the distribution induced by the deployment of a new model. Causal identifiability is crucial for enabling extrapolation. It quantifies the limits of what we can infer given access to the training data distribution, ignoring finite sample considerations. Identification with supervised learning. Identifiability ofMY from samples of D(fθ) implies that the historical data collected under the deployment of fθ contains sufficient information to recover the invariant relationship (7). As a concrete identification strategy, consider the following standard variant of supervised learning that takes in samples (x, ŷ, y) and builds a meta-model that predicts Y from X, Ŷ by solving the following risk minimization problem hSL := argmin h∈H E(x,ŷ,y)∼D(fθ) [ (h(x, ŷ)− y)2 ] . (8) whereH denotes the hypothesis class. We consider the squared loss for risk minimization because it pairs well with the exogeneous noise ξY in (3) being additive and zero mean. The strategy (8) is an instance of what we term predicting from predictions. Lemma 2 provides a sufficient condition for the supervised learning solution hSL to recover the invariant causal quantityMY . Lemma 2 (Identification strategy). Consider the data generation process in Figure 1 and a set of assumptions A. Given a hypothesis classH such that every h ∈ H complies with A and the problem is realizable, i.e., MY ∈ H. Then, if MY is causally identifiable from D(fθ) given A, the risk minimizer hSL in (8) will coincide withMY . 3.2 Challenges for identifiability The main challenge for identification ofMY from data is that in general, the prediction rule fθ which produces Ŷ is a deterministic function of the covariates X . This means that, for any realization of X , we only get access to one Ŷ = fθ(X) in the training distribution, which makes it challenging to disentangle the direct and the indirect effects of X on Y . To illustrate this challenge, consider the function h(x, ŷ) :=MY (x, fθ(x)) that ignores the input parameter ŷ and only relies on x for explaining the outcome. This function explains y equally well and can not be differentiated from MY based on data collected under the deployment of a deterministic prediction rule fθ. The problem is akin to fitting a linear regression model to two perfectly correlated covariates. More broadly, this ambiguity is due to what is known as a lack of overlap (or lack of positivity) in the literature of causal inference [47, 23]. In the covariate shift literature, the lack of overlap surfaces when the covariate distribution violates the common support assumption and the propensity scores are not well-defined (see e.g., Pan and Yang [46]). This problem renders causal identification and thus data-driven learning of performative effects from deterministic predictions fundamentally challenging. Proposition 3 (Nonidentifiability from deterministic predictions). Consider the structural causal model in Figure 1. Assume Y non-trivially depends on Ŷ , and the set Y is not a singleton. Then, given a deterministic prediction function f , the mechanismMY is not identifiable from D(f). The identifiability issue persists as long as the two variables X , Ŷ are deterministically bound and there is no incongruence or hidden structure that can be exploited to disentangle the direct effect of X on Y from the indirect effect mediated by Ŷ . 
In the following, we focus on particularities of prediction problems and show how they allow us to identifyMY . 3.3 Identifiability from randomization We start with the most natural setting that provides identifiability guarantees: randomness in the prediction function fθ. Using standard arguments about overlap [47] we can identifyMY (x, ŷ) for any pair x, ŷ with positive probability in the data distribution D(fθ) from which the training data is sampled. To relate this to our goal of identifying the outcome under the deployment of an unseen model fφ we introduce the following definition: Definition 2 (output overlap). Given two predictive models fθ, fφ, the model fφ is said to satisfy output overlap with fθ, if for all x ∈ X and any subset Y ′ ⊆ Y with positive measure, it holds that P[fφ(x) ∈ Y ′] P[fθ(x) ∈ Y ′] > 0. (9) In particular, output overlap requires the support of the new model’s predictions fφ(x) to be contained in the support of fθ(x) for every potential x ∈ X . The following proposition takes advantage of the fact that the joint distribution over (X,Y ) is fully determined by the deployed model’s predictions to relate output overlap to identification: Proposition 4. Given the causal graph in Figure 1, the mechanismMY (x, ŷ) is identifiable from D(fθ) for any pair x, ŷ with ŷ = fφ(x), as long as fφ is a prediction function that satisfies output overlap with fθ. Proposition 4 allows us to pinpoint the models fφ to which we can extrapolate to from data collected under fθ. Furthermore, it makes explicit that for collecting data to learn about performative effects, it is ideal to deploy a predictor fθ that is randomized so that the prediction output has full support over Y for any x. Such a model would generate a dataset that guarantees global identification ofMY over X × Y and thus robust conclusions about any future deployable model fφ. One interesting and relevant setting that satisfies this property is the differentially private release of predictions through an additive Laplace (or Gaussian) noise mechanism applied to the output of the prediction function [13].1 While standard in the literature, a caveat of identification from randomization is that there are several reasons a decision-maker may choose not to deploy a randomized prediction function in performative environments, including negative externalities and concerns about user welfare [29], but also business interests to preserve consumer value of the prediction-based service offered. In the context of our credit scoring example, random predictions would imply that interest rates are randomly assigned to applicants in order to learn how the rates impact their probability of paying back. We can not presently observe this scenario, given regulatory requirements for lending institutions. 3.4 Identifiability through overparameterization The following two sections consider situations where we can achieve identification, without randomization, from data collected under a deterministic fθ. Our first result exploits incongruences in functional complexity arising from machine learning models that are overparameterized [e.g. 30]. By overparameterization, we refer to the fact that the representational complexity of the model is larger than the underlying concept it needs to describe. Assumption 1 (overparameterization). We say a function f is overparameterized with respect to G over X if there is no function g′ ∈ G and c ∈ R such that f(x) = c · g′(x) for all x ∈ X . 
A challenge for identification is that for deterministic fθ the prediction can be reconstructed from X without relying on Ŷ , and thus h(x, ŷ) =MY (x, fθ(x)) can not be differentiated fromMY based on observational data. However, note that this ambiguity relies on there being an admissable h such that h(·, ŷ) for a fixed ŷ can represent fθ. If fθ is overparameterized with respect to the hypothesis classH, this ambiguity is resolved. Let us make this intuition concrete with an example: Example 3.1. Assume the structural equation for y in Figure 1 is g(x, ŷ) = αx + βŷ for some unknown α, β. Consider prediction functions fθ of the following form fθ(x) = γx2 + ξx for some γ, ξ ≥ 0. ConsiderH be the class of linear functions. Then, any admissable estimate h ∈ H takes the form h(x, ŷ) = α′x+ β′ŷ. For h to be consistent with observations we need α′ + β′ξ = α+ βξ and β′γ = βγ. This system of equations has a unique solution as long as γ > 0 which corresponds to the case where fθ is overparameterized with respect to H. In contrast, for γ = 0 the function h(x, ŷ) = (α+ βξ)x would explain the training data equally well. The following result generalizes this argument to separable functions. Proposition 5. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that g can be decomposed as g(X, Ŷ ) = g1(X) + αŶ for some α > 0 and g1 ∈ G, where the function class G is closed under addition (i.e. g1, g2 ∈ G ⇒ a1 · g1 + a2 · g2 ∈ G ∀a1, a2 ∈ R). Let H contain functions that are separable in X and Ŷ , linear in Ŷ , and ∀h ∈ H it holds that h(·, ŷ) ∈ G for a fixed ŷ. Then, if fθ is overparameterized with respect to G over the support of DX , MY is identifiable from D(fθ). 3.5 Identifiability from classification A second ubiquitous source of incongruence that we can exploit for identification is the discrete nature of predictions in the context of classification. The resulting discontinuity in the relationship between X and Ŷ enables us to disentangleMY from the direct effect of X on Y . This identification strategy is akin to the popular regression discontinuity design [33] and relies on the assumption that all other variables in X are continuously related to Y around the discontinuities in Ŷ . 1In Appendix B we discuss two additional natural sources of randomness (randomized decisions and noisy measurements of covariates) that can potentially help identification with appropriate side-information. Proposition 6. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that the structural equation for Y is separable g(X, Ŷ ) = g1(X) + g2(Ŷ ),∀X, Ŷ for some differentiable functions g1 and g2. Further, suppose X is a continuous random variable and Ŷ is a discrete random variable that takes on at least two distinct values with non-zero probability. Then, MY is identifiable from D(fθ). Similar to Proposition 5, the separability assumption together with incongruence provides a way to disentangle the direct effect from the indirect effect of X on Y . Separability is necessary in order to achieve global identification guarantees without randomness, the identification of entangled components without overlap is fundamentally hard. Thus, under violations of the separability assumptions, we can only expect the separable components of g to be correctly identified. Similarly, a regression discontinuity design only enables the identification of the causal effect locally around the discontinuity. 
Extrapolation away from the decision boundary to models fφ that are substantially different from fθ increasingly relies on separability to hold true.

4 Empirical evaluation

We investigate empirically how well the supervised learning solution hSL in (8) is able to identify the causal mechanism MY from observational data in practical settings with finite data.

Methodology. We generated semi-synthetic data for our experiments, using a Census income prediction dataset from folktables.org [11]. Using this dataset as a starting point, we simulate a training dataset and a test dataset with distribution shift as follows: First, we choose two different predictors fθ and fφ to predict a target variable of interest (e.g., income) from covariates X (e.g., age, occupation, education, etc.). If not specified otherwise, fθ is fit to the original dataset to minimize squared error, while fφ is trained on randomly shuffled labels. Next, we posit a function g for simulating the performative effects. Then, we generate a training dataset of (X, Ŷ, Y) tuples from the causal model in Figure 1, using the covariates X from the original data, g, and fθ to generate Ŷ and Y. Similarly, we generate a test dataset of (X, Ŷ, Y) tuples, using X, g, and fφ. We assess how well supervised methods learn transferable functional relationships by fitting a model hSL to the training dataset and then evaluating the root mean squared error (RMSE) for regression and the accuracy for classification on the test dataset. In our figures, we visualize the standard error from 10 replicates with different random seeds and we compare it to an in-distribution baseline trained and evaluated on samples of D(fφ). If not specified otherwise we use N = 200,000 samples.

4.1 Necessity of identification guarantees for supervised learning

We start by illustrating why our identification guarantees are crucial for supervised learning under performativity. To this end, we instantiate the structural equation g in Figure 1 as

g(X, Ŷ) = g1(X) + αŶ (10)

with g1(X) = β⊤X and ξY ∼ N(0, 1). The coefficients β are determined by linear regression on the original dataset. The hyperparameter α quantifies the performativity strength that we vary in our experiments. The predictions Ŷ are generated from a linear model fθ that we modify to illustrate the resulting impact on identifiability. We optimize hSL in (8) over H being the class of linear functions. We start by illustrating a failure mode of supervised learning in a non-identifiability setting (Proposition 3). To this end, we let fθ be a deterministic linear model fit to the base dataset (fθ(X) ≈ β⊤X). This results in MY not being identifiable from D(fθ). In Figure 2(a) we can see that supervised learning indeed struggles to identify a transferable functional relationship from the training data. The meta model returns hSL(X, Ŷ) = (1 + α)Ŷ, instead of identifying g, which leads to a high extrapolation error independent of the strength of performativity. While we only show the error for one fφ in Figure 2(a), the error grows with the distance d²DX(fθ, fφ). In contrast, when the feature Ŷ is not included, the supervised learning strategy returns hSL(X) = (1 + α)β⊤X. The extrapolation loss of this performativity-agnostic model scales with the strength of performativity (Proposition 1) and is thus strictly smaller than the error of the model that predicts from predictions. Next, we move to the regime of our identification results (Propositions 4–6).
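The following condenses the protocol above and the Figure 2(a) failure mode into a runnable sketch. This is our own simplification: synthetic Gaussian covariates stand in for the Census features, and all helper names are assumptions. Because the deployed fθ is deterministic and aligned with g1, the fitted meta model does not transfer to D(fφ); replacing fθ's output with a noisy release closes the gap, as described next.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, alpha = 200_000, 5, 0.5
beta = rng.normal(size=d)                     # coefficients of g1(X) = beta^T X

def simulate(f, x):
    """Sample (yhat, y) under the causal graph: Y = g1(X) + alpha*Yhat + noise."""
    yhat = f(x)
    y = x @ beta + alpha * yhat + rng.normal(size=len(x))
    return yhat, y

x_tr, x_te = rng.normal(size=(n, d)), rng.normal(size=(n, d))
f_theta = lambda x: x @ beta                  # deployed at training time (deterministic)
phi = rng.normal(size=d)
f_phi = lambda x: x @ phi                     # different model deployed at test time

yhat_tr, y_tr = simulate(f_theta, x_tr)       # training distribution D(f_theta)
yhat_te, y_te = simulate(f_phi, x_te)         # test distribution D(f_phi)

A = np.column_stack([x_tr, yhat_tr])          # predicting from predictions
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred = np.column_stack([x_te, yhat_te]) @ coef
print("extrapolation RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
# The noise std is 1.0, so an identified mechanism would achieve RMSE ~ 1.0. Here
# yhat_tr is collinear with x_tr @ beta, M_Y is not identifiable, and the min-norm
# least-squares split of the effect does not transfer to D(f_phi) (RMSE well above 1).
```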
To move into this regime, we modify the way the predictions in the training data are generated. In Figure 2(b) we use additive Gaussian noise to determine the predictions as Ŷ = fθ(X) + η with η ∼ N(0, σ²). In Figure 2(c) we augment the input to fθ with second-degree polynomial features to achieve overparameterization. In Figure 2(d) we round the predictions of fθ to obtain discrete values. In all three cases, including Ŷ as a feature is beneficial and allows the model to match in-distribution accuracy baselines, closing the extrapolation gap that is inevitable for performativity-agnostic prediction.

4.2 Strength of incongruence and finite samples

We next conduct an ablation study and investigate how the degree of overparameterization and the noise level of a randomized fθ impact the extrapolation performance of supervised learning. To this end, we consider the setup in (10) with a general function g1. We fix the level of performativity at α = 0.5 for this experiment. We optimize hSL in (8) over H (which we vary). In Figure 3(a) we investigate the effect of overparameterization of fθ on the extrapolation error of hSL. We choose fully connected neural networks with a single hidden layer to represent the functions g1, fθ and hSL. For g1 and H we take a neural network with m = 3 units in the hidden layer. The model g1 is fit to the original dataset. We vary the number of units in the hidden layer of fθ, denoted mθ. As expected, the extrapolation error decreases with the complexity of fθ. As soon as mθ > m there is a significant benefit to including predictions as features. In this regime, MY becomes identifiable as Proposition 5 suggests. In turn, without access to Ŷ the model suffers an inevitable extrapolation gap due to a concept shift that is independent of the properties of fθ. In Figure 3(b) we investigate the effect of the magnitude of additive noise added to the predictions. Here H and g1 are linear functions. We have Ŷ = fθ(X) + βη with η ∼ N(0, 1) and we vary the noise level β. We see that even small amounts of noise are sufficient for identification, and adding Ŷ as a feature to our meta machine learning model is effective as soon as the noise in fθ is non-zero. In Figure 3(c) we fix the noise level at β = 0.5 and vary the number of samples N. We find that only moderate dataset sizes are necessary for predicting from predictions to approximate MY in our identifiable settings.

5 Discussion

This paper focused on identifying the causal effect of predictions on outcomes from observational data. We point out several natural situations where this causal question can be answered, but we also highlight situations where observational data is not sufficiently informative to reason about performative effects. By establishing a connection between causal identifiability and the feasibility of anticipating performative effects using data-driven techniques, this paper contributes to a better understanding of the suitability of supervised learning techniques for explaining social effects arising from the deployment of predictive models in economically and socially relevant applications. We hope the positive results in this work serve as a message for data collection: only if predictions are observed can they be incorporated to anticipate the performative effects of future model deployments.
Thus, access to this information is crucial for an analyst hoping to understand the effects of deployed predictive models, an engineer hoping to foresee consequences of model updates, or a researcher studying performative phenomena. To date, such data is scarcely available in benchmark datasets, hindering progress towards a better understanding of performative effects, which is essential for the reliable deployment of algorithmic systems in the social world. At the same time, we have shown that the deterministic nature of prediction poses unique challenges for causal identifiability even if Ŷ is observed. Thus, the success of observational designs (as shown in our empirical investigations) is closely tied to the corresponding identifiability conditions being satisfied. Our results must not be understood as a green light to justify the use of supervised learning techniques to address performativity in full generality beyond the scope of our theoretical results.

Limitations and Extensions. The central assumption of our work is the causal model in Figure 1. While it carves out a rich and interesting class of performative prediction problems that allows us to articulate the challenges of covariates and predictions being coupled, it cannot account for all mechanisms of performativity. This in turn gives rise to interesting questions for follow-up studies. A first neglected aspect is performativity through social influence. Our causal model relies on the stable unit treatment value assumption (SUTVA) [23]. There is no possibility for the prediction of one individual to impact the outcome of his or her peers. Such an individualistic perspective is not unique to our paper but prevalent in existing causal analyses and model-based approaches to performative prediction and strategic classification [e.g., 20, 25, 43, 3, 18, 22]. Spillover effects [cf. 60, 64, 1, 40] are yet unexplored in the context of performative prediction. Nevertheless, they have important implications for how causal effects should be estimated and interpreted. In the context of our work they imply that an intervention on f can no longer be explained solely by changing an individual’s prediction. As a result, approaches for microfounding performative effects based on models learned from simple, unilateral interventions on an individual’s prediction result in different causal estimates than the supervised-learning-based methods for identification studied in this work. A preliminary study included in Appendix C shows that data-driven techniques can pick up on interference patterns in the data and benefit from structural properties such as network homophily [19], whereas individualistic modeling misses out on the indirect component arising from neighbors influencing each other. A second aspect is performativity in non-causal prediction. Our model posits that prediction is solely based on features X that are causal for the outcome Y. This is a desirable situation in many practical applications because causal predictions disincentivize gaming by strategic individuals manipulating their features [43, 3] and offer explanations for the outcome that persist across environments [54, 7]. Nevertheless, non-causal variables are often included as input features in practical machine learning prediction tasks. Establishing a better understanding of the implications of the resulting causal dependencies due to performativity could be an important direction for future work.
Finally, performative effects can also lead to covariate shift and impact the joint distribution P(X, Y) = P(Y|X)P(X) over covariates and labels. We assumed that performative effects only surface in P(Y|X). For our theoretical results, this implied that overlap in the X variable across environments is trivially satisfied, which enabled us to pinpoint the challenges of learning performative effects due to the coupling between X and Ŷ. Establishing identification in the presence of a causal arrow fθ → X would require additional steps.

Acknowledgement

The authors would like to thank Moritz Hardt and Lydia Liu for many helpful discussions throughout the development of this project, Tijana Zrnic, Krikamol Muandet, Jacob Steinhardt, Meena Jagadeesan and Juan Perdomo for feedback on the manuscript, and Gary Cheng for helpful discussions on differential privacy. We are also grateful for the constructive discourse and valuable feedback provided by the reviewers that greatly helped improve the manuscript.

6 Paper checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix F.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
(d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [Yes]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper regarding counterfactual outcomes and performative prediction? 2. What are the strengths of the proposed approach, particularly in terms of identification results and experimental reproducibility? 3. What are the weaknesses of the paper, especially regarding the scope of the work and the originality of the results? 4. Do you have any questions regarding the overparametrization identification result and its application to real-world models? 5. Is P(U|Y, Y^) necessary to know for identification, or is P(U|Y^) enough? Can you provide examples where knowing P(U|Y, Y^) would be beneficial? 6. Are there any typos or errors in the paper that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper studies the problem of estimating counterfactual outcomes under a different predictive model, thus drawing a link between causal inference and the recent literature on performative prediction. The paper starts with the hardness of performativity-agnostic learning and then describes identification results that enable the supervised learning approach to work. The authors also conduct experiments showing that the identification strategies work well in simulations and that mild violations of the assumptions are not of concern.

Strengths And Weaknesses
Strengths: The writing is clear and the problem is well motivated and stated. The identification results are clearly stated with enough explanations and examples. Overall the paper is very clear in exposition. The problem studied also has potential significance in real-world settings where non-stationarity arises, which I particularly like. The experiments are sound, with code available for reproducibility.
Weaknesses: The scope of the work is somewhat limited since the effect of the predictive model is assumed to operate only through Y^ (the authors acknowledge this in Section 2.1). When restricting to this particular causal graph, the first two identification results are essentially direct applications of existing results, thus limiting the originality of the work. Also, it would be nice to see some real examples in the experiment section.

Questions
In the overparameterization identification result, the authors mention neural networks as one of the examples. I am curious whether there are any empirical results illustrating this identification result under such a complex model, as real-world models are indeed getting more and more complex.
In Proposition 4, is P(U|Y, Y^) necessary to know, or is P(U|Y^) enough to enable identification? Could you say more about the applicability of knowing P(U|Y, Y^) in some real-world examples, i.e., which of the examples you mentioned seem more reasonable under this assumption?
Some possible typos: line 293 in the main text, "fθ we will": remove "we"? Line 307: should there be Y_j after the expectation?

Limitations
The authors are very clear about the limitations of the work.
NIPS
Title
Anticipating Performativity by Predicting from Predictions

Abstract
Predictions about people, such as their expected educational achievement or their credit risk, can be performative and shape the outcome that they are designed to predict. Understanding the causal effect of predictions on the eventual outcomes is crucial for foreseeing the implications of future predictive models and selecting which models to deploy. However, this causal estimation task poses unique challenges: model predictions are usually deterministic functions of input features and highly correlated with outcomes. This can make the causal effect of predictions on outcomes impossible to disentangle from the direct effect of the covariates. We study this problem through the lens of causal identifiability. Despite the hardness of this problem in full generality, we highlight three natural scenarios where the causal effect of predictions can be identified from observational data: randomization in predictions, overparameterization of the predictive model deployed during data collection, and discrete prediction outputs. Empirically we show that, given our identifiability conditions hold, standard variants of supervised learning that predict from predictions by treating the prediction as an input feature can find transferable functional relationships that allow for conclusions about newly deployed predictive models. These positive results fundamentally rely on model predictions being recorded during data collection, bringing forward the importance of rethinking standard data collection practices to enable progress towards a better understanding of social outcomes and performative feedback loops.

1 Introduction
Predictions can impact sentiments, alter expectations, inform actions, and thus change the course of events. Through their influence on people, predictions have the potential to change the regularities in the population they seek to describe and understand. This insight underlies the theories of performativity [38] and reflexivity [62] that play an important role in modern economics and finance. Recently, Perdomo et al. [51] pointed out that the social theory of performativity has important implications for machine learning theory and practice. Prevailing approaches to supervised learning assume that features X and labels Y are sampled jointly from a fixed underlying data distribution that is unaffected by attempts to predict Y from X. Performativity questions this assumption and suggests that the deployment of a predictive model can disrupt the relationship between X and Y. Hence, changes to the predictive model can induce shifts in the data distribution. For example, consider a lender with a predictive model for risk of default – performativity could arise if individuals who are predicted as likely to default are given higher-interest loans, which make default even more likely [41], akin to a self-fulfilling prophecy. In turn, a different predictive model that predicts smaller risk and suggests offering more low-interest loans could cause some individuals who previously looked risky to be able to pay the loans back, which would appear as a shift in the relationship between features X and loan repayment outcomes Y. This performative nature of predictions poses a challenge to using historical data to predict the outcomes that will arise under the deployment of future models.
1.1 Our work

In this work, we aim to understand under what conditions observational data is sufficient to identify the performative effects of predictions. Only when causal identifiability is established can we rely on data-driven strategies to anticipate performativity and reason about the downstream consequences of deploying new models. Towards this goal, we focus on a subclass of performative prediction problems in this paper where performative effects of predictions solely surface as a shift in the outcome variable, and the distribution over covariates X is unaffected by the prediction Ŷ. Our goal is to identify the expected counterfactual outcome MY(x, ŷ) := E[Y | X = x, do(Ŷ = ŷ)]. Understanding the causal mechanism MY is crucial for model evaluation, as well as model optimization. In particular, it allows for offline evaluation of the potential outcome Y of an individual X subject to a predictive model fnew with the prediction Ŷ = fnew(X) before actually deploying it.

The need for observing predictions. We start by illustrating the hardness of performativity-agnostic learning by relating performative prediction to a concept shift problem. Using the specifics of the performative shift, we establish a lower bound on the extrapolation error of predicting Y from X under the deployment of a new model fnew that is different from the model ftrain deployed during data collection. In particular, the extrapolation error grows with the distance between the prediction functions of the two models and the strength of performativity. This lower bound on the extrapolation error demonstrates the necessity of taking performativity into account for reliably predicting Y.

Predicting from predictions. We then explore the feasibility of learning performative effects when the training data recorded the predictions and training data samples (X, Y, Ŷ) are available. As an identification strategy for learning MY, we focus on building a meta machine learning model that predicts Y for an individual with features X, subjected to a prediction Ŷ. We term this data-driven strategy predicting from predictions; it treats the predictions as an input to the meta machine learning model. The meta model seeks to answer “what would the outcome be if we were to deploy a different prediction model?” Crucially, this “what if” question is causal in nature; it aims to understand the potential outcome under an intervention, which is different from merely estimating the outcome variable in previously seen data. Whether such a transferable model is learnable depends on whether the training data provides causal identifiability [49]. Only after causal identifiability is established can we rely on observational data to select and design optimal prediction models under performativity.

Establishing identifiability. For our main technical results, we first show that, in general, observing Ŷ is not sufficient for identifying the causal effects of predictions. In particular, if the training data was collected under the deployment of a deterministic prediction function, the mechanism MY cannot be uniquely identified. The reason is a lack of coverage in the training data, as X and Ŷ are deterministically bound. Next, we establish several conditions under which observing Ŷ is sufficient for identifying MY. The first condition exploits the presence of randomness in the prediction. This randomness could be purposely built into the prediction for individual fairness, differential privacy, or other considerations.
The second condition exploits the property that predictive models are often overparameterized, which leads to incongruence in functional complexity between different causal paths, enabling the effects of predictions to be separated from other variables’ effects. The third condition takes advantage of discreteness in predictions, such that performative effects can be disentangled from the continuous relationship between covariates and outcomes. Together, these results reveal that particularities of the performative prediction problem can enable us to recover the causal effect of predictions from observational data. In particular, we show that, under these conditions, standard supervised learning techniques can be used to find these transferable functional relationships by treating predictions as model inputs. Empirically, we demonstrate that supervised learning succeeds in finding MY even in finite samples. We conclude with a discussion of limitations and extensions of our work, pointing out potential violations of the modeling assumptions underlying our causal analysis and proposing directions for future work.

1.2 Broader context and related work

The work by Perdomo et al. [51] initiated the discourse on performativity in the context of supervised learning by pointing out that the deployment of a predictive model can impact the data distribution we train our models on. Existing scholarship on performative prediction [cf. 51, 42, 12, 44, 24, 26, 68, 45, 52, 31] has predominantly focused on achieving a particular solution concept with a prediction function that maps X to Y in the presence of unknown performative effects. We are interested in understanding the underlying causal mechanism of the performative distribution shift. Our work is motivated by the seemingly natural approach of lifting the supervised learning problem and incorporating the prediction as an input feature when building a meta machine learning model for explaining Y. By establishing a connection to causal identifiability, our goal is to understand when such a data-driven strategy can help anticipate the downstream effects of predictions.

This work focuses on the setting where predictions lead to changes in the relationship between covariates X and label Y, while the marginal distribution P(X) over covariates is assumed to be fixed. This setting where performativity only surfaces in the label describes an interesting subclass of problems falling under the umbrella of performative (a.k.a. model-induced or decision-dependent) distribution shifts [51, 37, 12]. Our assumptions are complementary to the strategic classification framework [8, 20] that focuses on a setting where performative effects concern P(X), while P(Y|X) is assumed to remain stable. Consequently, causal questions in strategic classification [e.g., 22, 3, 59] are concerned with identifying stable causal relationships between X and Y. Since we assume P(Y|X) can change (i.e., the true underlying ‘concept’ determining outcomes can change), conceptually different questions emerge in our work. Similar in spirit to strategic classification, the work on algorithmic recourse and counterfactual explanations [32, 28, 65] focuses on the causal link between features and predictions, whereas we focus on the downstream effects of predictions. There are interesting parallels between our work and related work on the offline evaluation of online policies [e.g., 35, 63, 36, 58].
In particular, [63] explicitly emphasize the importance of logging the propensities of the deployed policy during data collection to be able to mitigate selection bias. In our work the deployed model can induce a concept shift. Thus, we find that additional information about the predictions of the deployed model needs to be recorded to be able to foresee the impact of a new predictive model on the conditional distribution P(Y|X), beyond enabling propensity weighting [55]. A notable work by [66] investigates how predictions at one time step impact predictions at future time steps. Complementary to these existing works, we show that randomness in the predictive model is not the only way causal effects of predictions can be identified. For our theoretical results, we build on classical tools from causal inference [48, 57, 64]. In particular, we distill unique properties of the performative prediction problem to design assumptions for the identifiability of the causal effect of predictions.

2 The causal force of prediction

Predictions can be performative and impact the population of individuals they aim to predict. Formulated in the language of causal inference [48]: the deployment of a predictive model represents an intervention on a causal diagram that describes the underlying data generation process of the population. We will expand on this causal perspective to study an instance of the performative prediction problem described below.

2.1 Prediction as a partial mediator

Consider a machine learning application relying on a predictive model f that maps features X to a predicted label Ŷ. We assume the predictive model f is performative in that the prediction Ŷ = f(X) has a direct causal effect on the outcome variable Y of the individual it concerns. Thereby the prediction impacts how the outcome variable Y is generated from the features X. The causal diagram illustrating this setting is visualized in Figure 1. The features X ∈ X ⊆ Rd are drawn i.i.d. from a fixed underlying continuous distribution over covariates DX with support X. The outcome Y ∈ Y ⊆ R is a function of X, partially mediated by the prediction Ŷ ∈ Y. The prediction Ŷ is determined by the deployed predictive model f : X → Y. For a given prediction function f, every individual is assumed to be sampled i.i.d. from the data generation process described by the causal graph in Figure 1. We assume the exogenous noise ξY is zero mean, and ξf allows the prediction function to be randomized. Note that our model is not meant to describe performativity in its full generality (which includes other ways f may affect P(X, Y)). Rather, it describes an important and practically relevant class of performative feedback problems that are characterized by two properties: 1) performativity surfaces only in the label Y, and 2) performative effects are mediated by the prediction, such that Y ⊥ f | Ŷ, rather than depending on the specifics of the decision rule.

Application examples. Causal effects of predictions on outcomes have been documented in multiple contexts: A bank’s prediction about a client (e.g., their creditworthiness when applying for a loan) determines the interest rate assigned to them, which in turn changes the client’s financial situation [41]. Mathematical models that predict stock prices inform the actions of traders and thus heavily shape financial markets and economic realities [38]. Zillow’s housing price predictions directly impact sales prices [39].
Predictions about the severity of an illness play an important role in treatment decisions and hence the very chance of survival of the patient [34]. Another prominent example from psychology is the Pygmalion effect [56]. It refers to the phenomenon that high expectations lead to improved performance, which is widely documented in the context of education [6], sports [61], and organizations [16]. Examples of such performativity abound, and we hope to have convinced the reader that performative effects in the label are important for algorithmic prediction.

2.2 Implications for performativity-agnostic learning

We begin by considering the classical supervised learning task where Ŷ is unobserved. The goal is to learn a model h : X → Y for predicting the label Y from the features X. To understand the inherent challenge of classical prediction under performativity, we investigate the relationship between X and Y more closely. Specifically, the data generation process (Figure 1) implies that

P(Y|X) = ∫ P(Y|Ŷ, X) P(Ŷ|X) dŶ. (4)

This expression makes explicit how the relationship between X and Y that we aim to learn depends on the predictive model governing P(Ŷ|X). As a consequence, when the deployed predictive model at test time differs from the model at training time, performative effects surface as concept shift [17]. Such distribution shift problems are known to be intractable without structural knowledge about the shift, implying that we cannot expect h to generalize to distributions induced by future model deployments. Let us inspect the resulting extrapolation gap in more detail and put existing positive results on performative prediction into perspective.

Extrapolation loss. We illustrate the effect of performativity on predictive performance using a simple instantiation of the structural causal model from Figure 1. To this end, assume a linear performative effect of strength α > 0 and a base function g1 : X → Y, such that

g(X, Ŷ) := g1(X) + αŶ. (5)

Now, assume we collect training data under the deployment of a predictive model fθ and validate our model under the deployment of fφ. We adopt the notion of a distribution map from Perdomo et al. [51] and write DXY(f) for the joint distribution over (X, Y) surfacing from the deployment of a model f. We assess the quality of our predictive model h : X → Y over a distribution DXY(f) induced by f via the loss function ℓ : Y × Y → R and write Rf(h) := E_{x,y∼DXY(f)} ℓ(h(x), y) for the risk of h on the distribution induced by f. We use h∗f for the risk minimizer h∗f := argmin_{h∈H} Rf(h), and H for the hypothesis class we optimize over. Proposition 1 bounds the extrapolation loss and can be viewed as a concrete instantiation of the more general extrapolation bounds for performative prediction discussed in [37] within the feedback model from Figure 1.

Proposition 1 (Hardness of performativity-agnostic prediction). Consider the data generation process in Figure 1 with g given in (5) and fθ, fφ being deterministic functions. Take a loss function ℓ : Y × Y → R that is γ-smooth and µ-strongly convex in its second argument. Let h∗fθ be the risk minimizer over the training distribution and assume the problem is realizable, i.e., h∗fθ ∈ H. Then, we can bound the extrapolation loss of h∗fθ on the distribution induced by fφ as

(γ/2) α² d²DX(fθ, fφ) ≥ ∆Rfθ→fφ(h∗fθ) ≥ (µ/2) α² d²DX(fθ, fφ), (6)

where d²DX(fθ, fφ) := E_{x∼DX}(fθ(x) − fφ(x))² and ∆Rfθ→fφ(h) := Rfφ(h) − Rfθ(h).
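To make the bound tangible: for the squared loss, which is 2-smooth and 2-strongly convex in its second argument, both sides of (6) coincide and the extrapolation gap equals α² d²DX(fθ, fφ). The following is our own minimal simulation of this special case; the one-dimensional functions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 200_000, 0.5
x = rng.normal(size=n)

def g1(x):
    return 2.0 * x                         # base structural component

f_theta = lambda x: 1.0 * x                # predictor deployed during training
f_phi = lambda x: -1.0 * x                 # predictor deployed at test time

def risk(h, f):
    """Squared-error risk of h on the distribution induced by f (noise std 1)."""
    y = g1(x) + alpha * f(x) + rng.normal(size=n)
    return np.mean((h(x) - y) ** 2)

h_star = lambda x: g1(x) + alpha * f_theta(x)   # risk minimizer on D(f_theta)
gap = risk(h_star, f_phi) - risk(h_star, f_theta)
d2 = np.mean((f_theta(x) - f_phi(x)) ** 2)
print(f"empirical extrapolation gap: {gap:.3f}, alpha^2 * d^2: {alpha**2 * d2:.3f}")
# For the squared loss gamma = mu = 2, so both sides of (6) reduce to alpha^2 * d^2.
```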
The extrapolation loss ∆Rfθ→fφ(h∗fθ) is zero if and only if either the strength of performativity tends to zero (α → 0), or the predictions of the two predictors fθ and fφ are identical over the support of DX. If this is not the case, an extrapolation gap is inevitable. This elucidates the fundamental hardness of performative prediction from feature–label pairs (X, Y) when performative effects disrupt the causal relationship between X and Y. The special case where α = 0 aligns with the assumption of classical supervised learning, in which there is no performativity. This may hold in practice if the predictive model is solely used for descriptive purposes, or if the agent making the prediction does not enjoy any economic power [21]. The second special case where the extrapolation error is small is when d²DX(fθ, fφ) → 0, in which case DXY(fθ) and DXY(fφ) are equal in distribution and hence exhibit the same risk minimizer. Such a scenario can happen, for example, if the model fφ is obtained by retraining fθ on observational data and a fixpoint is reached (fθ = h∗fθ). The convergence of policy optimization strategies to such fixpoints (performative stability) has been studied in prior work [e.g., 51, 42, 12] and enabled optimality results even in the presence of performative concept shifts, relying on the target model fφ not being chosen arbitrarily, but based on a pre-specified update strategy.

3 Identifying the causal effect of prediction

Having illustrated the hardness of performativity-agnostic learning, we explore under what conditions incorporating the presence of performative predictions into the learning task enables us to anticipate the performative effects of Ŷ on Y. Towards this goal, we assume that the mediator Ŷ in Figure 1 is observed; the prediction takes on the role of the treatment in our causal analysis, and we cannot possibly hope to estimate the treatment effect of a treatment that is unobserved.

3.1 Problem setup

Assume we are given access to data points (x, ŷ, y) generated i.i.d. from the structural causal model in Figure 1 under the deployment of a prediction function fθ. From this observational data, we wish to estimate the expected potential outcome of an individual under the deployment of an unseen (but known) predictive model fφ. We note that given our causal graph, the implication of intervening on the function f can equivalently be explained by an intervention on the prediction Ŷ. Thus, we are interested in identifying the causal mechanism:

MY(x, ŷ) := E[Y | X = x, do(Ŷ = ŷ)]. (7)

Unlike P(Y|X), the mechanism MY is invariant to changes in the predictive model governing P(Ŷ|X). Thus, being able to identify MY will allow us to make inferences about the potential outcome surfacing from planned model updates, beyond explaining patterns in historical data. We can evaluate MY to infer y for any x at ŷ = fφ(x), for fφ being the model of interest. For simplicity of notation, we will write D(fθ) to denote the joint distribution over (X, Ŷ, Y) of the observed data collected under the deployment of the predictive model fθ. We say MY can be identified if it can be uniquely expressed as a function of observed data. More formally:

Definition 1 (identifiability). Given a predictive model f, the causal graph in Figure 1, and a set of assumptions A, we say MY is identifiable from D(f) if for any function h that complies with assumptions A and satisfies h(x, ŷ) = MY(x, ŷ) for pairs (x, ŷ) ∈ supp(DXY(f)), it must also hold that h(x, ŷ) = MY(x, ŷ) for all pairs (x, ŷ) ∈ X × Y.
Without causal identifiability, there might be models h′ ≠ MY that explain the training distribution equally well but do not transfer to the distribution induced by the deployment of a new model. Causal identifiability is crucial for enabling extrapolation. It quantifies the limits of what we can infer given access to the training data distribution, ignoring finite sample considerations.

Identification with supervised learning. Identifiability of MY from samples of D(fθ) implies that the historical data collected under the deployment of fθ contains sufficient information to recover the invariant relationship (7). As a concrete identification strategy, consider the following standard variant of supervised learning that takes in samples (x, ŷ, y) and builds a meta-model that predicts Y from X, Ŷ by solving the following risk minimization problem:

hSL := argmin_{h∈H} E_{(x,ŷ,y)∼D(fθ)} [(h(x, ŷ) − y)²], (8)

where H denotes the hypothesis class. We consider the squared loss for risk minimization because it pairs well with the exogenous noise ξY in (3) being additive and zero mean. The strategy (8) is an instance of what we term predicting from predictions. Lemma 2 provides a sufficient condition for the supervised learning solution hSL to recover the invariant causal quantity MY.

Lemma 2 (Identification strategy). Consider the data generation process in Figure 1 and a set of assumptions A. Given a hypothesis class H such that every h ∈ H complies with A and the problem is realizable, i.e., MY ∈ H. Then, if MY is causally identifiable from D(fθ) given A, the risk minimizer hSL in (8) will coincide with MY.

3.2 Challenges for identifiability

The main challenge for identification of MY from data is that in general, the prediction rule fθ which produces Ŷ is a deterministic function of the covariates X. This means that, for any realization of X, we only get access to one Ŷ = fθ(X) in the training distribution, which makes it challenging to disentangle the direct and the indirect effects of X on Y. To illustrate this challenge, consider the function h(x, ŷ) := MY(x, fθ(x)) that ignores the input parameter ŷ and only relies on x for explaining the outcome. This function explains y equally well and cannot be differentiated from MY based on data collected under the deployment of a deterministic prediction rule fθ. The problem is akin to fitting a linear regression model to two perfectly correlated covariates. More broadly, this ambiguity is due to what is known as a lack of overlap (or lack of positivity) in the literature on causal inference [47, 23]. In the covariate shift literature, the lack of overlap surfaces when the covariate distribution violates the common support assumption and the propensity scores are not well-defined (see, e.g., Pan and Yang [46]). This problem renders causal identification, and thus data-driven learning of performative effects from deterministic predictions, fundamentally challenging.

Proposition 3 (Nonidentifiability from deterministic predictions). Consider the structural causal model in Figure 1. Assume Y non-trivially depends on Ŷ, and the set Y is not a singleton. Then, given a deterministic prediction function f, the mechanism MY is not identifiable from D(f).

The identifiability issue persists as long as the two variables X, Ŷ are deterministically bound and there is no incongruence or hidden structure that can be exploited to disentangle the direct effect of X on Y from the indirect effect mediated by Ŷ.
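The ambiguity behind Proposition 3 is easy to exhibit numerically. The snippet below is our own illustration with assumed coefficients: two admissible functions agree on every training point generated under a deterministic fθ, yet disagree under an intervention do(Ŷ = ŷ).

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
yhat = 0.7 * x                             # deterministic deployed model f_theta
y = x + 0.5 * yhat                         # true mechanism: M_Y(x, yhat) = x + 0.5*yhat

def h1(x, yhat):
    return x + 0.5 * yhat                  # the true mechanism M_Y

def h2(x, yhat):
    return 1.35 * x                        # ignores yhat: M_Y(x, f_theta(x))

# Indistinguishable on the training distribution D(f_theta):
print("max gap on D(f_theta):", np.max(np.abs(h1(x, yhat) - h2(x, yhat))))  # ~1e-16
# But they disagree under an intervention, e.g. do(Yhat = 2) at x = 0:
print("h1:", h1(0.0, 2.0), " h2:", h2(0.0, 2.0))                            # 1.0 vs 0.0
```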
In the following, we focus on particularities of prediction problems and show how they allow us to identify MY.

3.3 Identifiability from randomization

We start with the most natural setting that provides identifiability guarantees: randomness in the prediction function fθ. Using standard arguments about overlap [47] we can identify MY(x, ŷ) for any pair x, ŷ with positive probability in the data distribution D(fθ) from which the training data is sampled. To relate this to our goal of identifying the outcome under the deployment of an unseen model fφ we introduce the following definition:

Definition 2 (output overlap). Given two predictive models fθ, fφ, the model fφ is said to satisfy output overlap with fθ if for all x ∈ X and any subset Y′ ⊆ Y with positive measure, it holds that

P[fφ(x) ∈ Y′] / P[fθ(x) ∈ Y′] > 0. (9)

In particular, output overlap requires the support of the new model’s predictions fφ(x) to be contained in the support of fθ(x) for every potential x ∈ X. The following proposition takes advantage of the fact that the joint distribution over (X, Y) is fully determined by the deployed model’s predictions to relate output overlap to identification:

Proposition 4. Given the causal graph in Figure 1, the mechanism MY(x, ŷ) is identifiable from D(fθ) for any pair x, ŷ with ŷ = fφ(x), as long as fφ is a prediction function that satisfies output overlap with fθ.

Proposition 4 allows us to pinpoint the models fφ to which we can extrapolate from data collected under fθ. Furthermore, it makes explicit that for collecting data to learn about performative effects, it is ideal to deploy a predictor fθ that is randomized so that the prediction output has full support over Y for any x. Such a model would generate a dataset that guarantees global identification of MY over X × Y and thus robust conclusions about any future deployable model fφ. One interesting and relevant setting that satisfies this property is the differentially private release of predictions through an additive Laplace (or Gaussian) noise mechanism applied to the output of the prediction function [13].¹ While standard in the literature, a caveat of identification from randomization is that there are several reasons a decision-maker may choose not to deploy a randomized prediction function in performative environments, including negative externalities and concerns about user welfare [29], but also business interests to preserve the consumer value of the prediction-based service offered. In the context of our credit scoring example, random predictions would imply that interest rates are randomly assigned to applicants in order to learn how the rates impact their probability of paying back. We cannot presently observe this scenario, given regulatory requirements for lending institutions.

3.4 Identifiability through overparameterization

The following two sections consider situations where we can achieve identification, without randomization, from data collected under a deterministic fθ. Our first result exploits incongruences in functional complexity arising from machine learning models that are overparameterized [e.g. 30]. By overparameterization, we refer to the fact that the representational complexity of the model is larger than the underlying concept it needs to describe.

Assumption 1 (overparameterization). We say a function f is overparameterized with respect to G over X if there is no function g′ ∈ G and c ∈ R such that f(x) = c · g′(x) for all x ∈ X.
A challenge for identification is that for deterministic fθ the prediction can be reconstructed from X without relying on Ŷ, and thus h(x, ŷ) = MY(x, fθ(x)) cannot be differentiated from MY based on observational data. However, note that this ambiguity relies on there being an admissible h such that h(·, ŷ) for a fixed ŷ can represent fθ. If fθ is overparameterized with respect to the hypothesis class H, this ambiguity is resolved. Let us make this intuition concrete with an example:

Example 3.1. Assume the structural equation for y in Figure 1 is g(x, ŷ) = αx + βŷ for some unknown α, β. Consider prediction functions fθ of the form fθ(x) = γx² + ξx for some γ, ξ ≥ 0, and let H be the class of linear functions. Then, any admissible estimate h ∈ H takes the form h(x, ŷ) = α′x + β′ŷ. For h to be consistent with observations we need α′ + β′ξ = α + βξ and β′γ = βγ. This system of equations has a unique solution as long as γ > 0, which corresponds to the case where fθ is overparameterized with respect to H. In contrast, for γ = 0 the function h(x, ŷ) = (α + βξ)x would explain the training data equally well.

The following result generalizes this argument to separable functions.

Proposition 5. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that g can be decomposed as g(X, Ŷ) = g1(X) + αŶ for some α > 0 and g1 ∈ G, where the function class G is closed under linear combinations (i.e., g1, g2 ∈ G ⇒ a1 · g1 + a2 · g2 ∈ G for all a1, a2 ∈ R). Let H contain functions that are separable in X and Ŷ, linear in Ŷ, and such that for every h ∈ H and fixed ŷ it holds that h(·, ŷ) ∈ G. Then, if fθ is overparameterized with respect to G over the support of DX, MY is identifiable from D(fθ).

3.5 Identifiability from classification

A second ubiquitous source of incongruence that we can exploit for identification is the discrete nature of predictions in the context of classification. The resulting discontinuity in the relationship between X and Ŷ enables us to disentangle MY from the direct effect of X on Y. This identification strategy is akin to the popular regression discontinuity design [33] and relies on the assumption that all other variables in X are continuously related to Y around the discontinuities in Ŷ.

Proposition 6. Consider the structural causal model in Figure 1 where fθ is a deterministic function. Assume that the structural equation for Y is separable, g(X, Ŷ) = g1(X) + g2(Ŷ) for all X, Ŷ, for some differentiable functions g1 and g2. Further, suppose X is a continuous random variable and Ŷ is a discrete random variable that takes on at least two distinct values with non-zero probability. Then, MY is identifiable from D(fθ).

Similar to Proposition 5, the separability assumption together with incongruence provides a way to disentangle the direct effect from the indirect effect of X on Y. Separability is necessary in order to achieve global identification guarantees without randomness; the identification of entangled components without overlap is fundamentally hard. Thus, under violations of the separability assumptions, we can only expect the separable components of g to be correctly identified. Similarly, a regression discontinuity design only enables the identification of the causal effect locally around the discontinuity.

¹In Appendix B we discuss two additional natural sources of randomness (randomized decisions and noisy measurements of covariates) that can potentially help identification with appropriate side-information.
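Proposition 6 can also be checked in simulation. The following sketch is our own (the functional forms are assumptions): rounding the predictions of a deterministic fθ introduces discontinuities in Ŷ that let a separable least-squares fit disentangle the direct effect from the performative effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha = 100_000, 0.5
x = rng.uniform(-2, 2, size=n)
yhat = np.round(0.9 * x)                   # discrete predictions from a deterministic f_theta
y = 2.0 * x + alpha * yhat + 0.1 * rng.normal(size=n)   # separable structural equation

A = np.column_stack([x, yhat])
(slope_hat, alpha_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"direct effect: {slope_hat:.3f} (true 2.0), "
      f"performative effect: {alpha_hat:.3f} (true {alpha})")
# The jumps of round(0.9*x) break the collinearity between x and yhat, so least
# squares can separate the smooth direct effect g1 from g2, as Proposition 6 predicts.
```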
Extrapolation away from the decision boundary to models fφ that are substantially different from fθ increasingly relies on separability to hold true.

4 Empirical evaluation

We investigate empirically how well the supervised learning solution hSL in (8) is able to identify the causal mechanism MY from observational data in practical settings with finite data.

Methodology. We generated semi-synthetic data for our experiments, using a Census income prediction dataset from folktables.org [11]. Using this dataset as a starting point, we simulate a training dataset and a test dataset with distribution shift as follows: First, we choose two different predictors fθ and fφ to predict a target variable of interest (e.g., income) from covariates X (e.g., age, occupation, education, etc.). If not specified otherwise, fθ is fit to the original dataset to minimize squared error, while fφ is trained on randomly shuffled labels. Next, we posit a function g for simulating the performative effects. Then, we generate a training dataset of (X, Ŷ, Y) tuples from the causal model in Figure 1, using the covariates X from the original data, g, and fθ to generate Ŷ and Y. Similarly, we generate a test dataset of (X, Ŷ, Y) tuples, using X, g, and fφ. We assess how well supervised methods learn transferable functional relationships by fitting a model hSL to the training dataset and then evaluating the root mean squared error (RMSE) for regression and the accuracy for classification on the test dataset. In our figures, we visualize the standard error from 10 replicates with different random seeds and we compare it to an in-distribution baseline trained and evaluated on samples of D(fφ). If not specified otherwise we use N = 200,000 samples.

4.1 Necessity of identification guarantees for supervised learning

We start by illustrating why our identification guarantees are crucial for supervised learning under performativity. To this end, we instantiate the structural equation g in Figure 1 as

g(X, Ŷ) = g1(X) + αŶ (10)

with g1(X) = β⊤X and ξY ∼ N(0, 1). The coefficients β are determined by linear regression on the original dataset. The hyperparameter α quantifies the performativity strength that we vary in our experiments. The predictions Ŷ are generated from a linear model fθ that we modify to illustrate the resulting impact on identifiability. We optimize hSL in (8) over H being the class of linear functions. We start by illustrating a failure mode of supervised learning in a non-identifiability setting (Proposition 3). To this end, we let fθ be a deterministic linear model fit to the base dataset (fθ(X) ≈ β⊤X). This results in MY not being identifiable from D(fθ). In Figure 2(a) we can see that supervised learning indeed struggles to identify a transferable functional relationship from the training data. The meta model returns hSL(X, Ŷ) = (1 + α)Ŷ, instead of identifying g, which leads to a high extrapolation error independent of the strength of performativity. While we only show the error for one fφ in Figure 2(a), the error grows with the distance d²DX(fθ, fφ). In contrast, when the feature Ŷ is not included, the supervised learning strategy returns hSL(X) = (1 + α)β⊤X. The extrapolation loss of this performativity-agnostic model scales with the strength of performativity (Proposition 1) and is thus strictly smaller than the error of the model that predicts from predictions. Next, we move to the regime of our identification results (Propositions 4–6).
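As a preview of those modifications, here is a minimal sketch of the overparameterization route behind Figure 2(c). It is our own illustration with assumed dimensions: giving fθ second-degree polynomial features breaks the collinearity with the linear meta-model and restores identifiability.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, alpha = 200_000, 3, 0.5
beta = rng.normal(size=d)
x = rng.normal(size=(n, d))

theta = rng.normal(size=2 * d)
yhat = np.column_stack([x, x**2]) @ theta  # f_theta sees second-degree features
y = x @ beta + alpha * yhat + rng.normal(size=n)

A = np.column_stack([x, yhat])             # meta-model h is linear in X and in Yhat
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("beta_hat: ", np.round(coef[:d], 2), "  true:", np.round(beta, 2))
print("alpha_hat:", round(coef[d], 2), " true:", alpha)
# Because yhat depends on x**2, it is not a linear function of x; the design matrix
# [X, Yhat] has full column rank and the alpha*Yhat component can only be explained
# by the Yhat feature, mirroring Proposition 5 and Figure 2(c).
```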
To move into this regime, we modify the way the predictions in the training data are generated. In Figure 2(b) we use additive Gaussian noise to determine the predictions as Ŷ = fθ(X) + η with η ∼ N(0, σ²). In Figure 2(c) we augment the input to fθ with second-degree polynomial features to achieve overparameterization. In Figure 2(d) we round the predictions of fθ to obtain discrete values. In all three cases, including Ŷ as a feature is beneficial and allows the model to match in-distribution accuracy baselines, closing the extrapolation gap that is inevitable for performativity-agnostic prediction.

4.2 Strength of incongruence and finite samples

We next conduct an ablation study and investigate how the degree of overparameterization and the noise level of a randomized fθ impact the extrapolation performance of supervised learning. To this end, we consider the setup in (10) with a general function g1. We fix the level of performativity at α = 0.5 for this experiment. We optimize hSL in (8) over H (which we vary). In Figure 3(a) we investigate the effect of overparameterization of fθ on the extrapolation error of hSL. We choose fully connected neural networks with a single hidden layer to represent the functions g1, fθ and hSL. For g1 and H we take a neural network with m = 3 units in the hidden layer. The model g1 is fit to the original dataset. We vary the number of units in the hidden layer of fθ, denoted mθ. As expected, the extrapolation error decreases with the complexity of fθ. As soon as mθ > m there is a significant benefit to including predictions as features. In this regime, MY becomes identifiable as Proposition 5 suggests. In turn, without access to Ŷ the model suffers an inevitable extrapolation gap due to a concept shift that is independent of the properties of fθ. In Figure 3(b) we investigate the effect of the magnitude of additive noise added to the predictions. Here H and g1 are linear functions. We have Ŷ = fθ(X) + βη with η ∼ N(0, 1) and we vary the noise level β. We see that even small amounts of noise are sufficient for identification, and adding Ŷ as a feature to our meta machine learning model is effective as soon as the noise in fθ is non-zero. In Figure 3(c) we fix the noise level at β = 0.5 and vary the number of samples N. We find that only moderate dataset sizes are necessary for predicting from predictions to approximate MY in our identifiable settings.

5 Discussion

This paper focused on identifying the causal effect of predictions on outcomes from observational data. We point out several natural situations where this causal question can be answered, but we also highlight situations where observational data is not sufficiently informative to reason about performative effects. By establishing a connection between causal identifiability and the feasibility of anticipating performative effects using data-driven techniques, this paper contributes to a better understanding of the suitability of supervised learning techniques for explaining social effects arising from the deployment of predictive models in economically and socially relevant applications. We hope the positive results in this work serve as a message for data collection: only if predictions are observed can they be incorporated to anticipate the performative effects of future model deployments.
Thus, access to this information is crucial for an analyst hoping to understand the effects of deployed predictive models, an engineer hoping to foresee consequences of model updates, or a researcher studying performative phenomena. To date, such data is scarcely available in benchmark datasets, hindering progress towards a better understanding of performative effects, which is essential for the reliable deployment of algorithmic systems in the social world. At the same time, we have shown that the deterministic nature of prediction poses unique challenges for causal identifiability even if Ŷ is observed. Thus, the success of observational designs (as shown in our empirical investigations) is closely tied to the corresponding identifiability conditions being satisfied. Our results must not be understood as a green light to justify the use of supervised learning techniques to address performativity in full generality beyond the scope of our theoretical results.

Limitations and Extensions. The central assumption of our work is the causal model in Figure 1. While it carves out a rich and interesting class of performative prediction problems that allows us to articulate the challenges of covariates and predictions being coupled, it cannot account for all mechanisms of performativity. This in turn gives rise to interesting questions for follow-up studies. A first neglected aspect is performativity through social influence. Our causal model relies on the stable unit treatment value assumption (SUTVA) [23]. There is no possibility for the prediction of one individual to impact the outcome of his or her peers. Such an individualistic perspective is not unique to our paper but prevalent in existing causal analyses and model-based approaches to performative prediction and strategic classification [e.g., 20, 25, 43, 3, 18, 22]. Spillover effects [cf. 60, 64, 1, 40] are yet unexplored in the context of performative prediction. Nevertheless, they have important implications for how causal effects should be estimated and interpreted. In the context of our work they imply that an intervention on f can no longer be explained solely by changing an individual’s prediction. As a result, approaches for microfounding performative effects based on models learned from simple, unilateral interventions on an individual’s prediction result in different causal estimates than the supervised-learning-based methods for identification studied in this work. A preliminary study included in Appendix C shows that data-driven techniques can pick up on interference patterns in the data and benefit from structural properties such as network homophily [19], whereas individualistic modeling misses out on the indirect component arising from neighbors influencing each other. A second aspect is performativity in non-causal prediction. Our model posits that prediction is solely based on features X that are causal for the outcome Y. This is a desirable situation in many practical applications because causal predictions disincentivize gaming by strategic individuals manipulating their features [43, 3] and offer explanations for the outcome that persist across environments [54, 7]. Nevertheless, non-causal variables are often included as input features in practical machine learning prediction tasks. Establishing a better understanding of the implications of the resulting causal dependencies due to performativity could be an important direction for future work.
Finally, performative effects can also lead to covariate shift and impact the joint distribution P(X, Y) = P(Y | X)P(X) over covariates and labels. We assumed that performative effects only surface in P(Y | X). For our theoretical results, this implied that overlap in the X variable across environments is trivially satisfied, which enabled us to pinpoint the challenges of learning performative effects due to the coupling between X and Ŷ. In the presence of a causal arrow fθ → X, additional steps would be required to ensure identifiability. Acknowledgement The authors would like to thank Moritz Hardt and Lydia Liu for many helpful discussions throughout the development of this project, Tijana Zrnic, Krikamol Muandet, Jacob Steinhardt, Meena Jagadeesan and Juan Perdomo for feedback on the manuscript, and Gary Cheng for helpful discussions on differential privacy. We are also grateful for the constructive discourse and valuable feedback provided by the reviewers that greatly helped improve the manuscript. 6 Paper checklist 1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] (b) Did you describe the limitations of your work? [Yes] (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix F. (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] 2. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] (b) Did you include complete proofs of all theoretical results? [Yes] 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [Yes] (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper regarding causal inference in machine learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its motivation, clarity, and formal results? 3. How does the reviewer assess the paper's novelty and relevance to practical situations? 4. What are the limitations of the framework, and what are some potential directions for future work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In the traditional supervised setting, we seek to predict a response Y based on features X, say as Ŷ = fθ(X). The question this paper asks is the following: What happens when the prediction Ŷ itself influences Y? Can we then still rigorously reason about our predictive model? To answer the above question, the paper makes the following contributions: It introduces a causal framework to answer the above question. Instead of the traditional regression estimand E[Y | X = x], the estimand of interest in this setting is the following: h(x, ŷ) = E[Y | X = x, do(Ŷ = ŷ)]. It establishes three practical situations under which the above estimand is identified from observational data and may furthermore be estimated by supervised learning based on weighted empirical risk minimization. These three situations are the following: when noisy predictions are released (e.g., in differential privacy), when the features X are noisy measurements of latent features U, and under separability (along with overparameterization or discrete classification). A semi-synthetic study is conducted based on census data and Kaggle credit scoring data. Strengths And Weaknesses Strengths Motivation: The setting and problem tackled in this paper are exceptionally well-motivated. It is clear that rigorous reasoning about the causal effects of predictions is of great practical and theoretical importance. Clarity: The prose of the paper (but not the formal results, see below) is a joy to read and very clear. Weaknesses Previous work: I believe that the paper misses important previous references. Rigorously thinking about causal effects of predictions and seeking to estimate them is not a new idea, even within the NeurIPS community. For example, the following paper (also see references therein) addresses such a problem, and the underlying method is based on adding noise to the predictors (i.e., very similar to one of the identification strategies in the present paper): Wager, S., Chamandy, N., Muralidharan, O., & Najmi, A. (2014). Feedback detection for live predictors. Advances in Neural Information Processing Systems, 27. Formal results: Several of the formal results of the paper are not well-stated, have errors (e.g., in the proofs), and make incorrect references to previous work. See points below for a more detailed listing of such issues. Questions Extrapolation error Please clearly define what extrapolation error is (I do not think it is ever defined). It may be helpful to state the theoretical result (Proposition 1) with the middle term in the inequality replaced by Rfφ(ψ(fθ)) − Rfφ(ψ(fφ)). Even though the 2nd term is 0, it may be more instructive to express it as above. In Proposition 1, it seems that smoothness and strong convexity refer to the second argument of the loss function. If so, this should be clarified/explicitly stated. Overlap Front door adjustment: I did not understand the reference to front door adjustment in both the main text and the proof of Proposition 3. For example, in the proof, it is written that the front door adjustment formula is applied to Ŷ → TŶ → Y. But note that here X is actually observed, and is conditioned upon, so front door adjustment does not seem necessary? Instead it seems that the argument boils down to the fact that in this setting: E[Y | X = x, do(Ŷ = ŷ)] = E[Y | X = x, Ŷ = ŷ], and the RHS is now estimable/identifiable (while it is not under the conditions of Proposition 2).
Page 6, Line 226: P(TŶ = t | X) > 0: This is not correct; this should be replaced by a statement about the conditional density of TŶ, or alternatively, stated as in Assumption 1. Noisy measurement of covariates "information about the noise distribution of the measurement error P(U | X = x)": Is the measurement error distribution not specified instead by p(x | U = u)? I do not understand the statement of Proposition 4. What are these weights exactly? They are stated as: w(x, ŷ) = P(U | X = x)/P(U | Ŷ = ŷ), but this does not make sense to me, since U is a latent random variable (and so the above could not be the density at a fixed u). Also why does the beginning of the statement state that knowledge of P(U | Ŷ = ŷ, Y = y) is required even though only P(U | Ŷ = ŷ) is included in the weight definition? Proposition 4: In the proof of Proposition 4, the equality in line (19) is incorrect. Separability: Proposition 5: It would be very helpful if the specification of H in this statement could be more explicit. Example 3.1: "inferring β = c1 and α = c1 − c2": I think for α one would infer α = c1 instead of α = c1 − c2. Proposition 6: Here the main text claims that results for the regression discontinuity design are used, but then the actual proof cites a result from Wang and Blei (2019) that is unrelated to the regression discontinuity design. Also in the proof, what is x′? Why does supervised learning recover the true effect? Minor: The supervised learning approach: This should be in bold instead of italics (to match the bold of other paragraphs in the manuscript). In equation (8), why are the weights allowed to depend also on y? Would it suffice if they only depend on (x, ŷ)? There is a typo in the statement of Assumption 2. Limitations Limitations of the framework are not discussed. It would be great if a paragraph could be added with possible directions for future work!
NIPS
Title Accelerating Sparse Convolution with Column Vector-Wise Sparsity Abstract Weight sparsity is a promising approach to reducing the model size and computation cost of convolutional neural networks (CNNs). Nevertheless, non-zero weights often distribute randomly in sparse CNN models, introducing enormous difficulty in obtaining actual speedup on common hardware (e.g., GPU) over their dense counterparts. Existing acceleration solutions either require hardware modifications for irregular memory access support or rely on a partially structured sparsity pattern. Neither of these methods is capable of achieving fruitful speedup on convolution layers. In this work, we propose an algorithm-software co-designed sparse convolution based on a novel out-vector-wise (OVW) sparse pattern. Building on the insight that vertical vector integrity can preserve continuous memory access in IM2COL, the OVW pattern treats a V × 1 vector as a unit. To reduce the error caused by sparsity, we propose an equivalent transformation process, i.e., clustering-based channel permutation, to gather similar rows together. Experimental evaluations demonstrate that our method achieves a 1.7× and 3.2× speedup over the SOTA solution and the dense convolution of ResNet50 on NVIDIA V100 at 75% sparsity, respectively, with only negligible accuracy loss. Moreover, compared to the SOTA solution, which achieves speedups only at 60% sparsity or more, our method begins to obtain speedups at only 10% sparsity. 1 Introduction Recently, convolutional neural networks (CNNs) have yielded astonishing results in many important domains such as vision [8] and language [18]. As CNN algorithms develop rapidly, the storage and computation overheads of CNN models grow dramatically. To significantly reduce both computation and memory access, weight sparsity has been adopted as a promising approach to improve hardware efficiency. Despite its success in reducing computation and data access, unconstrained, fine-grained sparsity fails to bring practical speedups on common GPUs. This is because unstructured sparsity generally induces tremendous memory access conflicts and load imbalance, which lower GPU performance. For example, on NVIDIA V100, sparse matrix multiplication is no faster than dense matrix multiplication until the sparsity ratio exceeds 95% [17, 3]. Unfortunately, existing solutions either require hardware modifications or only partially address the problem by being constrained to structured, coarse-grained sparsity, resulting in high accuracy loss. The former leverages the sparse matrix-matrix multiplication (SPMM) operation on GPU. While directly applying SPMM to sparse CNNs can run even slower than dense CNNs [17], productive SPMM acceleration solutions [21, 14] often require dedicated hardware support to overcome discontinuous memory access, which is impractical. The latter leverages the general matrix-matrix multiplication (GEMM) operation on GPU. Recent works focus on structured sparsity with different sparse patterns to gain speedup benefits from weight sparsity.
Block sparsity [4] manages to restore the spatial locality of matrices to a large extent, at the cost of a strict restriction on the distribution of non-zero weights. Balanced sparsity [2, 19, 15], newly supported on the NVIDIA A100 GPU [14], however, lacks flexibility in choosing the model sparsity ratio: only an exact 50% sparsity ratio can be deployed on this dedicated hardware. These efforts achieve palpable acceleration compared to the dense GEMM operation, but they all struggle to attain similar results on convolution layers, which have proven to be a greater challenge. To tackle these problems, here we present a novel sparse convolution acceleration algorithm featuring column-wise sparsity and implicit matrix multiplication. Specifically, the proposed column-wise sparsity is dubbed the out-vector-wise (OVW) sparse pattern, since the pattern sparsifies a matrix by treating a V×1 vector as an entirety, as shown in Figure 1. During convolution, the OVW pattern can maintain both strong memory consistency and high data reuse rates of the input matrices using implicit matrix multiplication. Moreover, we propose to employ channel permutation and row clustering to improve the accuracy of OVW sparse pattern-based CNNs. Besides, a GPU kernel is carefully designed to ensure that our OVW sparse pattern is supported by common GPUs. With these efforts, our algorithm consistently outperforms other sparse convolution acceleration algorithms on various CNN models. More importantly, our algorithm can accelerate convolutions even with a very low weight sparsity ratio, e.g., 10%. In contrast, prior art only works well when the weight sparsity ratio is over 60%. The main contributions of this paper are listed as follows: • We propose a vector-based sparsity pattern, i.e., the OVW pattern, to balance inference accuracy loss and computation efficiency in a hardware-friendly manner. • We implement a new GPU convolution kernel to support the OVW pattern. The kernel utilizes a technique of extracting filter location information, which further reduces inference runtime. • We propose a heuristic clustering method to obtain an appropriate channel permutation that reduces the accuracy drop during weight pruning. This channel permutation is conducted offline and therefore does not affect inference time. • Our GPU kernel can accelerate convolution over a wide range of model sparsity ratios. With negligible accuracy loss, the kernel speeds up ResNet50 by 1.7× and 3.2× over the SOTA solution and the dense cuDNN convolution, respectively, on an NVIDIA V100 GPU at the 75% sparsity level. 2 Related work 2.1 Software-only Acceleration For Sparse CNN Model Weight pruning has been a popular technique for efficient CNN inference. Early studies [7, 6] show that removing a large proportion of unimportant connections in CNN models does not necessarily lead to inference accuracy impairment. Reducing parameters helps exploit redundancy in CNN models, requiring fewer computations and data accesses. However, at inference time, weight-pruned sparse CNNs usually perform worse than their dense counterparts unless the sparsity ratios are substantial, i.e., the CNNs are very sparse. To address this issue, methods other than unstructured sparsity are exploited. Researchers impose various constraints on sparsity patterns in exchange for computation efficiency. A prominent line of work is filter pruning, where the parameters of an entire filter are pruned or kept as a whole. However, this direct modification of channel sizes suffers a sharp accuracy drop [10, 11, 16].
Moderate sparsity patterns are also examined, such as block sparsity [17], which is proposed to elevate the spatial locality of sparse matrices but achieves speedup only when sparsity ratios are larger than 70%. Tile-wise sparsity [5] endows weight patterns with more flexibility. Compared to previous methods, balanced sparsity [15, 19] is more practical thanks to recent support from the NVIDIA A100 GPU, which directly optimizes 2:4 balanced sparsity. The recent work Shfl_BW [9] uses a matrix transformation to exploit block sparsity's computation efficiency while removing some of its constraints. In this way, the threshold weight sparsity ratio that enables acceleration is reduced from 70% to 60%. Different from prior works, which only work for very sparse matrices, our algorithm can achieve speedup when the sparsity ratio is only 10%. 2.2 GEMM Based Convolution GEMM has been widely adopted to perform convolution, and it performs significantly better than other convolution methods such as FFT and Winograd on modern commercial hardware accelerators such as TPUs and GPUs. GEMM-based algorithms can be further divided into two types: explicit matrix multiplication and implicit matrix multiplication. Explicit matrix multiplication uses IM2COL to adapt inputs for GEMM. IM2COL is an IO-intensive operation, which brings significant workload beyond the computation cost [1]. Implicit matrix multiplication merges these operations for more efficient memory accesses: it updates pointers into the feature map in shared memory and performs tile-based matrix multiplication simultaneously. On NVIDIA V100 GPUs, explicit GEMM convolutions consume on average 120%, 126%, and 142% of the time of implicit GEMM-based convolution on the convolution layers of AlexNet, ResNet, and GoogLeNet [20]. Yet, few studies have investigated sparse convolution with implicit GEMM. Sparse convolution via GEMM has so far been performed through explicit rather than implicit matrix multiplication. This is because the IM2COL operation is extremely difficult, if not impossible, for sparse matrix multiplication, as sparse matrices are compactly compressed and irregularly stored. As a result, implicit GEMM, which does not have to suffer from the costly IM2COL operation, has the potential to achieve higher efficiency for sparse convolution. In this paper, we investigate implicit GEMM-based sparse convolution to leverage the high-performance GEMMs on existing hardware.
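To make the distinction concrete, the following minimal NumPy sketch shows explicit GEMM convolution via IM2COL. This is our own illustration under assumed shapes (stride 1, no padding), not the paper's kernel; an implicit GEMM computes the same product from pointer offsets into the feature map without ever materializing the patch matrix.

# Minimal sketch (assumed setup): explicit GEMM convolution via IM2COL.
import numpy as np

def im2col(x, R, S):
    # unfold a (C, H, W) feature map into a (C*R*S, P*Q) patch matrix
    C, H, W = x.shape
    P, Q = H - R + 1, W - S + 1            # output size (stride 1, no padding)
    cols = np.empty((C * R * S, P * Q), dtype=x.dtype)
    for c in range(C):
        for r in range(R):
            for s in range(S):
                cols[(c * R + r) * S + s] = x[c, r:r + P, s:s + Q].reshape(-1)
    return cols, P, Q

def conv2d_explicit_gemm(x, w):
    # w: (K, C, R, S) filters; returns a (K, P, Q) output feature map
    K, C, R, S = w.shape
    patches, P, Q = im2col(x, R, S)        # the IO-heavy step implicit GEMM avoids
    return (w.reshape(K, C * R * S) @ patches).reshape(K, P, Q)

x = np.random.randn(3, 8, 8)
w = np.random.randn(16, 3, 3, 3)
print(conv2d_explicit_gemm(x, w).shape)    # (16, 6, 6)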
3 Accelerating Sparse Convolution In this section, we introduce our proposed sparse convolution algorithm, including the OVW sparsity pattern for the proposed sparse convolution, its advantage in convolution computation, and our detailed implementation on GPU. 3.1 The OVW Pattern The OVW pattern belongs to the vector-wise (VW) pattern, one of three categories of sparsity patterns in a matrix. As shown in Fig 1, the first sparsity pattern is the element-wise (EW) pattern, corresponding to unstructured pruning, which evaluates each parameter individually. Imposing no constraint on pruning, this pattern succeeds in model flexibility but struggles to deliver actual acceleration due to its irregular memory accesses. The second sparsity pattern is the VW pattern, which can be further divided into the inter-vector-wise (IVW) pattern and the OVW pattern. Both treat a V×1 vector as an entirety: the IVW pattern prunes a certain proportion of weights inside each vector, whereas the OVW pattern prunes or keeps the entire vector of weights. The third pattern is the block-wise (BW) pattern, whose minimum pruning granularity is a V×V block. This pattern has the highest computation efficiency, but its inference accuracy loss is high as well. In this work, we use the OVW pattern since it shares the advantages of the VW pattern, balancing the computation efficiency of BW against the network accuracy of EW. The Shfl_BW pattern is actually a variant of this pattern that uses an extra channel reordering procedure to recover block-wise pattern benefits. 3.2 The OVW Pattern's Advantage in Convolution Computation The biggest advantage of the OVW pattern is that it fits the way an efficient dense warp-level GEMM instruction fetches input data. This instruction is the key contributor to most sparse matrix acceleration methods. The reasons are as follows. As shown in Fig 2, OVW pattern-based sparse convolution can be broken down into multiple dense matrix multiplications of smaller sizes. During the loading process of a dense convolution procedure, a column of filter data loaded into shared memory shares a specific position on the filter map. In the meantime, a contiguous block of the feature map is loaded accordingly to prepare for the convolution. Several columns fetched from the filters together form the left input matrix of the block matrix multiplication, and their corresponding feature data blocks form the right input matrix. Notice that this process of forming the input matrices does not require the loaded filter columns to be contiguous, meaning that efficient dense operations can also be performed by grouping unrelated columns. Based on this observation, we can take in multiple columns with unrelated column indices from the OVW-pattern sparse matrix and handle them in the same way as the dense GEMM operation. This similarity between our convolution algorithm and the implicit GEMM convolution guarantees similar overall computation efficiency. What's more, the Shfl_BW pattern, as a variant of this pattern, requires an extra channel reordering procedure to recover block-wise pattern benefits; the OVW pattern, however, can be used directly in our convolution algorithm, which indicates a higher acceleration potential. Compared to N:M sparsity, an instance of the IVW pattern, our approach needs no specialized hardware support and is much more flexible in selecting the sparsity ratio of each layer. Besides, the IVW pattern still faces the memory-bound issue, because the fraction of redundant data that needs to be loaded into shared memory each time equals its sparsity ratio. A minimal sketch of this gather-then-dense-GEMM idea is given below.
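The sketch below (our own illustration, not the GPU kernel; block size, shapes, and names are assumptions) shows both halves of the idea: pruning whole V×1 column vectors by L1 magnitude within each V-row block, then computing each block as a dense product over only its kept columns.

# Minimal sketch (assumed setup): OVW pruning and the resulting dense-gather
# computation. Each V-row block keeps whole V x 1 column vectors and is then
# multiplied as an ordinary dense GEMM over its kept columns.
import numpy as np

def ovw_prune(flat_filters, V, sparsity):
    # zero out the smallest-L1 V x 1 vectors in each V-row block
    M, K = flat_filters.shape              # M = output channels, K = C*R*S
    pruned = flat_filters.copy()
    keep_idx = []
    for b in range(0, M, V):
        block = pruned[b:b + V]
        score = np.abs(block).sum(axis=0)  # L1 norm of each V x 1 vector
        n_keep = int(round(K * (1 - sparsity)))
        kept = np.sort(np.argsort(score)[-n_keep:])
        mask = np.zeros(K, dtype=bool)
        mask[kept] = True
        block[:, ~mask] = 0.0
        keep_idx.append(kept)              # per-block kept column indices
    return pruned, keep_idx

def ovw_matmul(pruned, keep_idx, V, patches):
    # gather the kept filter columns and their matching feature rows, then run
    # a dense matmul on the smaller operands, block by block
    out = np.zeros((pruned.shape[0], patches.shape[1]))
    for i, b in enumerate(range(0, pruned.shape[0], V)):
        cols = keep_idx[i]
        out[b:b + V] = pruned[b:b + V][:, cols] @ patches[cols]
    return out

W = np.random.randn(64, 27)                # 64 filters flattened, C*R*S = 27
patches = np.random.randn(27, 100)         # IM2COL output
Wp, idx = ovw_prune(W, V=32, sparsity=0.75)
assert np.allclose(ovw_matmul(Wp, idx, 32, patches), Wp @ patches)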
3.3 GPU Sparse Kernel Implementation As shown in Algorithm 1, our convolution kernel implementation contains three steps. The first step is to get the corresponding feature pointer offsets by recovering the original filter structure information; this calculation is done by the function Cal_Thread_Offset. The second step is to load data from both input matrices into shared memory: some threads use the function Load_column to load a column vector of length TM from the filters to shared memory with DY threads, and threads then use the function Load_row to load a row vector from the feature map in the same way. The third step calls the warp matrix multiplication operators. When calculating the matrix product A_{M×K} × B_{K×N} = C_{M×N}, three-dimensional thread parallelism (DX, DY, DZ) is employed: (DX, DY, DZ) threads loop along (M, N, K), respectively. Several threads together use the function Warp_MMA to multiply the loaded matrices and write back after the result accumulation is finished. Each thread computes a tile matrix multiplication of size (TM, TN, TK). The procedure of calculating the exact pointer offset into the feature map for the corresponding filter columns contains two steps. During a convolution computation, the corresponding location of a_ij × b_jk is not obvious. Hence, after loading a_ij into shared memory, the GPU kernel first fetches the column index of a_ij in the original filter. The column index is then used to recover the exact position of this column in the filter map. Subsequently, the location offset of the corresponding data in the feature map is calculated, after which b_jk can finally be located in the feature map.

Algorithm 1: Sparse convolution computation
Data: row_idx[], filter[], input[]
Result: output[]
  shared memory A[TM][TK], B[TK][TN], C[TM][TN]
  for thread idx = 1 to DX, idy = 1 to DY, idz = 1 to DZ do
      offset = Cal_Thread_Offset(row_idx[], idx, idy, idz)
      if idx < TK then Load_column(A, filter[idz][idx], TM, idy)
      if idy < TK then Load_row(B, input[offset], TN, idx)
      Syncthreads()
      Warp_MMA(A, B, idx, idy)
      Accumulate_Results(C)
      Store(output, C)
  end
  return output

If only the column indices of the sparse matrices are stored, their location information has to be recovered each time before the corresponding activation is loaded. Like the location offset in the feature map, the corresponding data location in the filter map can be prepared in advance, because it is constant for each thread during the whole process. The extra storage cost of this technique is two additional dimension-index arrays for the filter map, which take merely 3% of the total storage of a compressed model with vector length 64, and 6% with vector length 32, in exchange for a 10% average runtime reduction on ResNet50's convolution layers. Considering that a sparse model is already highly compressed, this additional model redundancy is acceptable. 4 Pruning Algorithm In this section, we introduce our pruning algorithm for the OVW pattern, including the channel permutation technique and our method for acquiring a desired permutation order. 4.1 Channel Permutation Our pruning method can be divided into two steps: shuffling the filter matrix rows in each layer and applying vector-wise pruning. Here we explain why filter permutation does no harm to network inference. Permuting the rows of the filter matrix only swaps the order of the output dimension and does not change the actual computation; the permuted operation results can be recovered through a reverse permutation of the output. As we only permute the output channels of each layer, the permuted order of the current layer is absorbed by the input channel dimension of the next GEMM-based layer (convolution or linear). Fig 3 shows one iteration of channel permutation between layer_k and layer_{k+1}. As we reorder the output channels of layer_k, activation_k is changed to the same order, but after we permute the input channels of layer_{k+1}, activation_{k+1} is restored. More weight value can be preserved after permutation. The same operation is then repeated on layer_{k+1}, and so on, until every GEMM layer in the network is permuted; a minimal sketch of this permutation transfer follows.
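The sketch below (our own illustration; linear layers stand in for the GEMM-based layers, and all names are assumptions) verifies that permuting the output channels of one layer and the input channels of the next leaves the network function unchanged.

# Minimal sketch (assumed setup): permutation transfer between two
# GEMM-based layers leaves the network output unchanged.
import numpy as np

rng = np.random.default_rng(0)
C_in, C_mid, C_out = 8, 16, 4
W1 = rng.normal(size=(C_mid, C_in))    # layer k (rows = output channels)
W2 = rng.normal(size=(C_out, C_mid))   # layer k+1 (columns = input channels)
relu = lambda z: np.maximum(z, 0)      # any per-channel nonlinearity commutes

x = rng.normal(size=C_in)
y = W2 @ relu(W1 @ x)

perm = rng.permutation(C_mid)          # order chosen offline by row clustering
W1_p = W1[perm]                        # reorder output channels of layer k
W2_p = W2[:, perm]                     # absorb it in layer k+1's input channels
assert np.allclose(y, W2_p @ relu(W1_p @ x))
# BN parameters and biases between the two layers act per channel, so they
# would be permuted with the same `perm`.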
This permutation transfer procedure allows us to choose an appropriate permutation order of rows for every filter without altering the output of the network. The remaining layers, such as the pooling layers and the activation layers, involve no modification along the channel dimension and thus are not affected by this process. The BN layers and the biases added at the end of convolution layers and linear layers do not produce any new permutation order, but they have to be permuted according to the permutation passing through them. 4.2 Row Clustering

Algorithm 2: Row clustering
Data: the original weight W, the number of clusters k, the number of selected columns m.
Result: the reordered weight RW.
  RW = empty
  while W is not empty do
      sort the columns of W by column variance
      build SampleW by selecting the columns with the top-m largest variances
      get the k clustered groups G = Balanced_kmeans(SampleW, k)
      select the group g with the maximum sum
      append g to RW
      remove g from W
  end
  return RW

We use a row clustering method to obtain an appropriate permutation order. A heuristic indicator of the quality of a permutation is the sum of the absolute weight values being pruned, under the assumption that preserving more important weights corresponds to less inference accuracy loss. An obvious route is to assign weight rows with shorter pairwise distances to the same group as much as possible. Shfl_BW chooses the k-means method for clustering, but after careful experiments, we found that k-means does not suit this problem well. First, the number of elements in each group must equal a fixed value (the vector length), and k-means requires additional operations to meet this constraint. Also, the data dimension (input channels times filter height and width) is large, while the number of data points and groups is relatively small; k-means easily falls into local minima, and the output clustering is extremely unstable. We therefore adopt balanced k-means [13] and modify it to alleviate both symptoms mentioned above. Algorithm 2 shows the key steps of our algorithm. First, we construct a characteristic matrix by assembling the columns with the highest variance, then cluster the rows of this matrix to alleviate the excessive dimensionality. We utilize balanced k-means to obtain equal-size clusters: in each iteration, instead of assigning each data vector to its nearest cluster center as in the original k-means algorithm, a distance matrix between all the vectors and the current cluster centers is formed, and we minimize the total distance subject to each cluster containing the same number of data vectors. This minimization problem can be converted to bipartite matching, and we employ the Kuhn-Munkres algorithm to solve it. Second, for each clustering result, we only adopt the most important group under this feature to increase the stability of the procedure. This group is removed from the original matrix, and the feature matrix is then reconstructed and clustered again. The above steps repeat until all rows are grouped, yielding the permuted matrix and its permutation order; a minimal sketch of the balanced assignment step follows.
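The sketch below (our own illustration under assumed shapes; the full Algorithm 2 adds the variance-based column sampling and the group-by-group extraction) shows one balanced k-means pass in which the assignment step is solved as a bipartite matching with SciPy's Kuhn-Munkres solver.

# Minimal sketch (assumed setup): one balanced k-means pass. Each center is
# replicated n/k times so the Kuhn-Munkres assignment yields exactly equal
# cluster sizes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_kmeans(rows, k, n_iter=20, seed=0):
    n = rows.shape[0]
    assert n % k == 0, "the group size (vector length) must divide the row count"
    size = n // k
    rng = np.random.default_rng(seed)
    centers = rows[rng.choice(n, k, replace=False)]
    for _ in range(n_iter):
        dist = np.linalg.norm(rows[:, None] - centers[None], axis=2)  # (n, k)
        cost = np.repeat(dist, size, axis=1)       # (n, n): k centers x size slots
        row_ind, col_ind = linear_sum_assignment(cost)
        labels = col_ind[np.argsort(row_ind)] // size
        centers = np.stack([rows[labels == c].mean(axis=0) for c in range(k)])
    return labels

W = np.random.randn(128, 288)              # flattened filters: rows = channels
labels = balanced_kmeans(W, k=128 // 32)   # groups of size V = 32
print(np.bincount(labels))                 # every group contains exactly 32 rows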
5 Evaluation 5.1 Model Accuracy We evaluate our method on several popular CNN models on an NVIDIA V100 GPU. We only calculate the speedup of the convolution layers in the following results. Table 1 shows the accuracy of our method compared to unstructured sparsity, where V stands for the vector length. "OVW permuted" shows better accuracy than "OVW non-permuted" on all CNN models. The upper bound of V in our kernel implementation is 64, and we tend to make it as large as possible to maximize shared memory usage. However, group convolution has no filter data reuse for vector lengths larger than its group size. Similarly, V is set to 1 in depthwise convolution layers. Besides that, the first convolution layer of SqueezeNet has only 96 output channels: if the vector length were 64, the second tile in the output channel dimension would process only 32 rows and thus be only half loaded. In this case, the vector length is set to 32 to maximize computation resource utilization. The V selection strategy here is to maximize V while utilizing shared memory resources as fully as possible. All the results in this table use the same fine-tuning process: we fine-tune each network for 40 epochs after pruning with the same learning rate of 0.0008. Also, each layer can hold a different sparsity ratio, thanks to our acceleration of convolution at low sparsity ratios. 5.2 Convolution Kernel Speedup As shown in Fig 4, we evaluate the speedup of our method on three popular CNN models, using the cuDNN convolution operator as the dense baseline. The first three graphs in Fig 4 represent three typical convolution shapes in CNN models: small channel size with a large feature map, medium channel size with a medium feature map, and large channel size with a small feature map. Our kernel can accelerate all these types of convolution layers, and it excels at the twelfth convolution layer of ResNet50 (the most frequently used kind of convolution layer), reaching 4.8× and 3.87× speedups over cuDNN on V100 at 80% sparsity with vector lengths 64 and 32, respectively. 5.3 Comparing Different Sparsity Patterns We replicate two vector-level sparsity patterns, balanced sparsity (NVIDIA 2:4) [15] and Shfl_BW [9], for comparison. Other vector-level sparsity patterns, such as tile-wise sparsity [5], are slower than the former two, and these patterns also lack implementations for convolution. Table 2 shows results of ResNet50 on ImageNet copied directly from the Shfl_BW paper, where an expensive method, Grow and Prune [12], is used to recover its accuracy; Grow and Prune is a sparsity-pattern-independent method. We fine-tune our pretrained ResNet50 model for 20 epochs with a learning rate of 0.001. Lowering the sparsity of our network to 70%, our method demonstrates 73.35% top-1 accuracy and a 2.79× speedup. Our method exhibits a clearly better speed-accuracy tradeoff than Shfl_BW. The OVW pattern also achieves a better speedup than balanced sparsity while recovering the full accuracy of the original model. To ensure a fair comparison, we reproduce these results under the same setting on ResNet50 and CIFAR-100, as shown in Fig 5; we fine-tune each network for 40 epochs after pruning from pretrained dense models with the same learning rate of 0.0008. The OVW pattern dominates the speed-accuracy trade-off in vector-level sparsity. 6 Conclusion Accelerating sparse convolution poses a greater challenge than accelerating sparse matrix multiplication. In this work, we propose a novel sparsity pattern, the OVW pattern, to facilitate sparse convolution acceleration while keeping accuracy intact. Limitations remain: our method relies heavily on hardware support for the implicit GEMM convolution algorithm; the performance of our base dense kernel relative to the proprietary, unpublished ones is unstable; and our method does not achieve the same acceleration on plain matrix multiplication, with its acceleration rate being subject to the filter shape.
Its performance also degrades in specialized convolution layers where data reuse opportunities are limited. Even so, our GPU implementation still substantially outperforms all sparse acceleration approaches with a sparsity pattern of similar flexibility. We hope this work can fill the gap in specialized sparse convolution kernel design and that our methodology can inspire further research in this domain. Acknowledgement We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research. This work is partially supported by the National Key R&D Program of China (under Grant 2017YFA0700902), the NSF of China (under Grants 61925208, 61732020, U19B2019), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32050200), the Beijing Academy of Artificial Intelligence (BAAI), the CAS Project for Young Scientists in Basic Research (YSBR-029), the Youth Innovation Promotion Association CAS, and the Xplore Prize.
1. What is the main contribution of the paper regarding sparse convolutional kernels? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its outer-vector-wise sparsity and implicit matrix multiplication? 3. How does the reviewer assess the presentation and reproducibility of the paper? 4. What are some specific suggestions for improving the writing, illustrations, and experimental results? 5. Is there any concern or question regarding the paper's focus on accelerating sparse convolutions? 6. Are there any limitations of the method that the authors have not addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces a new sparse convolution kernel, with the following properties: The filters are "outer-vector-wise" sparse. This means that the matrix of flattened filters has its columns split into vectors, and each vector is either zero or non-zero. This allows the kernel to use implicit matrix multiplication. (In explicit matrix multiplication the im2col operation is used to extract patches of the feature map, which are then used in a batched matrix multiply. In implicit matrix multiplication, pointers into the feature map are used instead of explicitly extracting the patches. This is faster.) When compressing a dense network, there are two ways that the sparsity pattern can be optimized: The sparsity pattern can be chosen (i.e., which vectors are zero and non-zero, under the constraint that each column of the flattened filter matrix must have the same number) The rows of the flattened filter matrix can be permuted (i.e., the output channels can be permuted), since this doesn't actually change the computation (as long as the next layer's input channels are permuted correspondingly). The authors propose a row clustering algorithm which uses k-means to find a permutation of the rows that maximizes the weights of the unpruned entries. The results show that this sparsity kernel can reach significant speedups at relatively low sparsity levels. The experiments also show that the test accuracy remains relatively high after pruning and fine-tuning. Lastly, it's shown that the permutations provide a small but consistent improvement in the test accuracy compared to not permuting. Strengths And Weaknesses I think this paper proposes a very neat sparse convolutional kernel. The outer-vector-wise sparsity seems like a good trade-off between fast structured sparsity (e.g., block-wise sparsity) and flexibility (e.g., unstructured sparsity). This works nicely with the implicit matrix multiplication approach, which gives impressive experimental results. The main weakness of this paper is the presentation and reproducibility. The writing is not good, which makes it more difficult to follow along than it should be, and the pseudo-algorithms seem to be missing important details required to reproduce their results. Similarly, the figures are not as elucidating as they could be, and the experimental results are presented in a way that is difficult to parse. All in all, the amount of editing that needs to be done to bring this paper to a publishable state is on the borderline of what is reasonable to expect from a conference rebuttal cycle. This makes it difficult to recommend this paper as is. Questions Writing Overall the text requires a significant amount of editing. For example, the abstract contains typos ("re[p]ly on") and grammar mistakes such as incorrect plurals/singulars ("pattern[s]", "speedup[s]"), missing articles ("[the] OVW pattern"), and incorrect conjugations ("achieve[s]"). I found the writing to be meandering and unstructured throughout. As a reader it was often unclear to me what the main point of each section was. I suggest the authors spend some time editing the text for both grammar and structure. The text could also be more precise to help ensure reproducibility. Section 4.2 is a particular example of this, with vague statements ("kmeans requires additional operations to meet this demand") and a lack of precision, e.g., "We employ Kuhn-Munkres algorithm to solve the distance matrix and optimize the mean square error of it". Which distance matrix?
Mean square error of what? None of this seems to appear in algorithm 2. Illustrations I believe figure 1's illustration of "inter-vector-wise" sparsity contains a small mistake (the two top right vectors have 3 instead of 2 unpruned entries) Figure 2 is confusing to me: The diagram makes it look as if the sparse filter is first densified to then be passed to load_columns, the output of which seems the same as the original input data? There's a function recover which seems to just be reshape? And the figure suggests that the feature offsets are a function of the filters' values, whereas they are actually computed based on the row indices (row_idx). Experimental results I believe it would be valuable to perform multiple runs and provide confidence intervals on the presented results, given the small margins (e.g., 65.49 and 65.46 for SqueezeNet OVW permuted/non-permuted). The "V" in the table of results should probably be explained in the caption. I assume it's the vector length as in the text, but what does it mean for the vector length to be 44.48 or 32.02? And perhaps it is better to move this column to the end of the table, to make it easier to compare the OVW columns to the baseline columns. The results are also a bit hard to parse since they require the reader to compare numbers across 6 columns to see which ones are bigger or smaller. Perhaps the authors could report "drop/increase in accuracy compared to baseline" instead? And perhaps the columns are better grouped by their sparsity level rather than by the sparsity type. Training This paper assumes a setting of dense training followed by compression for deployment. I would be interested to see the authors discuss using these sparse kernels during training as well. Title The title of the paper is overly general. Many papers discuss accelerating sparse convolutions after all. Perhaps something along the lines of "Column-wise sparsity and implicit matrix multiplication for efficient sparse convolutions" would be more descriptive of the content of the paper. Limitations The authors claim to have discussed the limitations of their method, but it's not clear to me where they did so.
NIPS
Title Accelerating Sparse Convolution with Column Vector-Wise Sparsity Abstract Weight sparsity is a promising approach to reducing the model size and computation cost of convolutional neural networks (CNNs). Nevertheless, non-zero weights often distribute randomly in sparse CNN models, introducing enormous difficulty in obtaining actual speedup on common hardware (e.g., GPU) over their dense counterparts. Existing acceleration solutions either require hardware modifications for irregular memory access support or rely on a partially structured sparsity pattern. Neither of these methods is capable of achieving fruitful speedup on convolution layers. In this work, we propose an algorithm-software co-designed sparse convolution based on a novel out-vector-wise (OVW) sparse pattern. Building on the insight that vertical vector integrity can preserve continuous memory access in IM2COL, the OVW pattern treats a V × 1 vector as unit. To reduce the error caused by sparsity, we propose an equivalent transformation process, i.e., clustering-based channel permutation, to gather similar rows together. Experimental evaluations demonstrate that our method achieves a 1.7× and 3.2× speedup over the SOTA solution and the dense convolution of ResNet50 on NVIDIA V100 at 75% sparsity, respectively, with only negligible accuracy loss. Moreover, compared to the SOTA solution that achieves speedups only on data with 60% sparsity or more, our method begins to obtain speedups on data with only 10% sparsity. 1 Introduction Recently, convolutional neural networks (CNNs) have yielded astonishing results in many important domains such as vision [8], and language [18]. With CNN algorithm developing rapidly, CNN models’ storage and computing overhead grow exponentially. To significantly reduce both the computations and memory access, weight sparsity has been adopted as a promising approach to improve hardware efficiency. Despite the success in reducing computations and data access, unconstrained, fine-grained sparsity fails to bring practical speedups on common GPUs. This is because unstructured sparsity generally induces tremendous access conflicts and load unbalances, which lowers GPU’s performance. For example, on NVIDIA V100, the sparse matrix multiplication performs not faster than the dense matrix multiplication until the sparsity ratio is over 95% [17, 3]. Unfortunately, existing solutions either require hardware modifications or only partially address the problem by being constrained with structured, coarse-grained sparsity, resulting in high accuracy loss. The former is to leverage the sparse matrix-matrix multiplication (e.g., SPMM) operation ∗Corresponding author 36th Conference on Neural Information Processing Systems (NeurIPS 2022). on GPU. While directly applying SPMM on sparse CNNs can run even slower than dense CNNs [17],productive SPMM acceleration solutions[21, 14] often require their unique need for dedicated hardware support to overcome discontinuous memory access, which is impractical. The latter is to leverage the general matrix-matrix multiplication (e.g., GEMM) operation on GPU. Recent works focus on structured sparsity with different sparse patterns to gain speedup benefits from weight sparsity. 
Block sparsity[4] manages to restore the spatial locality of matrices to a large extent, at the cost of a strict restriction on the non-zero Balanced sparsity[2, 19, 15], newly introduced on NVIDIA A100 GPU[14], however, lacks flexibility in choosing model sparsity rate that only exact 50% sparsity ratio could be deployed on this dedicated hardware. These efforts achieve some palpable acceleration compared to dense GEMM operation, but they all struggle to attain similar results on convolution layers which have proven to be a greater challenge. To tackle these problems, here we present a novel sparse convolution acceleration algorithm featured with column-wise sparsity and implicit matrix multiplication. Specifically, the proposed column-wise sparsity is dubbed the out-vector-wise (OVW) sparse pattern since the pattern sparsifies a matrix by treating a V×1 vector as an entirety, as shown in Figure 1. During convolution, the OVW pattern can hold both strong memory consistency and high data reuse rates of input matrices using implicit matrix multiplication. Moreover, we propose to employ channel permutation and row clustering to improve the accuracy of OVW sparse pattern-based CNNs. Besides, a GPU kernel is carefully designed to ensure that our OVW sparse pattern is supported by common GPUs. With these efforts, our algorithm predominantly outperforms other sparse convolution acceleration algorithms on various CNN models. More importantly, our algorithm can achieve the acceleration of convolutions even with a very low weight sparsity ratio, e.g., 10%. Instead, prior arts can only work fine when the weight sparsity ratio is over 60%. The main contributions of this paper are listed as follows: • We propose a vector-based sparsity pattern, i.e., the OVW pattern to balance inference accuracy loss and computation efficiency in a hardware-friendly manner. • We implement a new GPU convolution kernel to support the OVW pattern. The kernel utilizes the technique of extracting filter location information which can further reduce inference runtime. • We propose a heuristic clustering method to obtain an appropriate channel permutation for reducing accuracy drop during weight pruning. This channel permutation operation is conducted offline, which does not affect inference time. • Our GPU kernel can accelerate convolution at a wide range of model sparsity rates. With few accuracy loss, the kernel can speed up ResNet50 by 1.7× and 3.2×, respectively over the SOTA solution and the dense cuDNN convolution on NVIDIA V100 GPU at 75% sparsity level. 2 Related work 2.1 Software-only Acceleration For Sparse CNN Model Weight pruning has been a popular technique for efficient CNN inference. Early studies[7, 6] show that removing a large proportion of unimportant connections in CNN models does not necessarily lead to inference accuracy impairment. Reducing parameters helps exploit redundancy in CNN models, which requires fewer computations and data accesses. However, for CNN inference, weight pruned sparse CNNs usually perform worse than dense counterparts, unless the CNN sparsity ratios are substantial, i.e., very sparse CNNs. To address this issue, methods other than unstructured sparsity are exploited. Researchers exploit various constraints on sparsity patterns in exchange for computation efficiency. A primary domain of this region is filter pruning, where parameters of an entire filter are pruned or kept as a whole. However, this direct modification of channel size suffers a sharp accuracy drop [10, 11, 16]. 
Moderate sparsity patterns are also examined, such as block sparsity[17], which is proposed to elevate the spatial locality of sparse matrices. But this feature can achieve speedup only when sparsity ratios are larger than 70%. Tile-wise sparsity[5] endows weight patterns with more flexibility. Compared to previous methods, balanced sparsity[15, 19] is more feasible with recent support from NVIDIA A100 GPU which directly optimizes 2:4 balanced sparsity. Recent work Shfl_BW [9] uses matrix transformation to utilize block sparsity’s computation efficiency while removing some of its constraints. In this way, the threshold of weight sparsity ratio that enables acceleration is reduced from 70% to 60%. Different from prior works which only work for very sparse matrices, in this paper, our algorithm can achieve speedup when the sparsity ratio is only 10%. 2.2 GEMM Based Convolution GEMM has been adopted widely to perform convolution and it performs significantly better than other convolution methods such as FFT, and Winograd on modern commercial hardware accelerators such as TPUs and GPUs. The GEMM-based algorithms could be further divided into two types: explicit matrix multiplication and implicit matrix multiplication. Explicit matrix multiplication uses IM2COL to adapt inputs for GEMM. IM2COL is an IO-intensive operation, which brings in significant workload other than computation cost[1]. Implicit matrix multiplication merges these operations for more efficient memory accesses. It updates pointers of feature map in shared memory and performs tile-based matrix multiplication simultaneously. On NVIDIA V100 GPUs, explicit GEMM convolutions consume on average 120%, 126%, and 142% in time compared to implicit GEMM-based convolution on convolution layers of Alexnet, Resnet and Googlenet [20]. Yet, few studies have investigated the sparse convolution with implicit GEMM. Performing sparse convolution by GEMM is always through explicit matrix multiplication rather than implicit matrix multiplication. This is because of the IM2COL operation which is extremely difficult if not impossible for sparse matrix multiplication, as sparse matrices are compactly compressed and irregularly stored. As a result, implicit GEMM who does not have to suffer from the costly IM2COL operation has the potential to achieve higher efficiency for sprase convolution. In this paper, we investigate the implicit GEMM-based sparse convolution to leverage the high-performance GEMMs on existing hardware. 3 Accelerating Sparse Convolution In this section, we introduce our proposed sparse convolution algorithm, including the OVW pattern of sparsity for the proposed sparse convolution, its advantage in convolution computation and our detailed implementation on GPU. 3.1 The OVW Pattern The OVW pattern belongs to the vector-wise(VW) pattern which is one of the three different pattern categories of sparsity in matrix. As shown in Fig 1, the first sparsity pattern is the element-wise(EW) pattern, corresponding to unstructured pruning, which evaluates each parameter individually. Having imposed no constraint on pruning, this pattern succeeded at model flexibility but struggled at actual acceleration due to its irregular memory accesses. The second sparsity pattern is the VW pattern, which can be further divided into the inter-vector-wise(IVW) and the OVW pattern. 
They both treat a V×1 vector as an entirety while the IVW pattern prunes a certain proportion of weights inside each vector and the OVW pattern focuses on the entire vector of weights. The third pattern is the block-wise(BW) pattern, and its minimum pruning granularity is a V×V block. This pattern has the highest computation efficiency, but its inference accuracy loss is high as well. In this work, we use the OVW pattern since it shares the advantages of the VW pattern, which balances computation efficiency of BW and network accuracy of EW. The Shfl_BW pattern is actually a variant of this pattern, who uses an extra channel reordering procedure to gather block-wise pattern utilities. 3.2 The OVW Pattern’s Advantage in Convolution Computation The biggest advantage of the OVW pattern is that it fits the way an efficient dense warp-level GEMM instruction fetches input data. This instruction is the key contributor to most of the sparse matrix acceleration methods. The reasons are as followed. As shown in Fig 2, the OVW pattern-based sparse convolution can be broken down into multiple dense matrix multiplications of smaller sizes. During the loading process of a dense convolution procedure, a column of filter data loaded into shared memory shares a specific position on the filter map. In the meantime, a continuous block in the feature map is loaded accordingly to prepare convolution. Several columns fetched from the filters together form the left input matrix of the block matrix multiplication and their corresponding feature data blocks form the right input matrix. Noticing that this forming process of input matrix does not require the loaded filter columns to be continuous, meaning that efficient dense operations can also be performed by grouping some unrelated columns. Based on this observation, we could take in multiple columns of irrelevant column indices from the OVW pattern sparse matrix and handle them in the same way as the dense GEMM operation. This similarity between our convolution algorithm and the implicit GEMM convolution guarantees us similar overall computation efficiency. What’s more, other sparsity such as the Shfl_BW pattern who is actually a variant of this pattern, uses an extra channel reordering procedure to gather block-wise pattern utilities. The OVW pattern, however, could be directly used in our convolution algorithm which denotes a higher acceleration potential. Compared to N:M sparsity,one of the IVW pattern, our approach does not need specialized hardware supports and it is much more elastic in selecting the sparsity ratio of each layer. Besides, the IVW pattern still faces the memory-bound issue, because the amount of redundant data that needs to be loaded into shared memory each time is equal to its sparsity ratio. 3.3 GPU Sparse Kernel Implementation As shown in Algorithm 1, our convolution kernel implementation contains three steps. The first step is to get the corresponding feature pointer offsets through original filter structure information recovery. These parts of calculation are done by the function Cal_Thread_Offset. The second step is to load data from both input matrices into shared memory. Some threads use the function load_column to load a column vector of length TM from filters to shared memory with DY threads. Threads then use the function load_row to load a row vector from the feature map in the same way. The third step calls warp matrix multiplication operators. 
When calculating matrix AM×K × BK×N = CM×N , a three dimension parallelism(DX,DY,DZ) thread is employed. (DX,DY,DZ) threads loop along (M,N,K) individually. Several threads together use the function Warp_MMA to multiply the loaded matrices and write back after results accumulation is finished. Each thread computes a tile matrix multiplication of the size (TM, TN, TK). The procedure of calculating the exact pointer offset of the feature map for corresponding filter columns contains two steps. During a convolution computation, the corresponding location of aij × bjk is not obvious. Hence after loading aij into shared memory, firstly, the GPU kernel has to fetch the column indices of aij in the original filter. The column indices are then used to recover the exact position of this column in the filter map. Subsequently, the location offset of the corresponding data in the feature map is calculated, after which bjk can be finally located in the feature map. Algorithm 1: Sparse convolution computation Data: row_idx[], filter[], input[] Result: output[] 1 Shared memory A[TM ][TK], B[TK][TN ], C[TM ][TN ]; 2 for Thread idx=1 to DX, idy=1 to DY, idz=1 to DZ do 3 offset = Cal_Thread_Offset(row_idx[], idx, idy, idz); 4 if idx < TK then 5 Load_column(A, filter[idz][idx], TM , idy); 6 end 7 if idy < TK then 8 Load_row(B, input[offset], TN , idx); 9 end 10 Syncthreads(); 11 Warp_MMA(A, B, idx, idy); 12 Accumulate_Results(C); 13 Store(output, C); 14 end 15 Return output; If only the column indices of sparse matrices are stored, their location information has to be recovered each time before the corresponding activation is loaded. Like the location offset in the feature map, the corresponding data location in the filter map could be prepared in advance. Because it is a constant for each thread during the whole process. Extra storage occupation of this technique is two extra dimension indices data array of filter map, which takes merely 3% total storage of a compressed model with vector length=64, and 6% with vector length=32, in exchange for 10% run time reduction of Resnet50’s convolution layers on average. Considering that a sparse model is already highly compressed, this additional model redundancy is totally acceptable. 4 Pruning Algorithm In this section, we introduce our pruning algorithm for the OVW pattern, including the channel permutation technique and our method to acquire a desired permutation order for it. 4.1 Channel Permutation Our pruning method can be divided into two steps: shuffling the filter matrix rows in each layer and applying vector-wise pruning. Here we will explain why filter permutation will do no harm to the network inference. Matrix multiplication only swaps the order in the output dimension and does not change the actual computation. Permuted operation results can be recovered through a reversed permutation of the operand output. As we only permute the output channel of each layer, the permuted order of the current layer will be absorbed by the input channel dimension of the next GEMM-based layer(convolution or linear). Fig 3 shows one iteration for channel permutation between layerk and layerk+1. As we have reordered the output channel of layerk, the activationk is changed to the same order, but after we permute the input channel of layerk+1, activationk+1 is restored. More weights value could be saved after permutation. The same operation is then repeated on layerk+1 and so on until every GEMM layer in the network is permuted. 
This permutation-transfer procedure allows us to choose an appropriate row permutation for every filter without altering the output of the network. The remaining layers, such as pooling layers and activation layers, involve no modification along the channel dimension and are thus not affected by this process. The BN layers, and the biases added at the end of convolution and linear layers, do not produce any new permutation order, but they have to be permuted according to the permutation passing through. 4.2 Row Clustering Algorithm 2: Row clustering Data: The original weight W, number of clusters k, number of selected columns m. Result: The reordered weight RW. 1 RW = empty; 2 while W is not empty do 3 Sort the columns of W by column variance; 4 Build SampleW by selecting the columns with the top-m largest variances; 5 Get the k clustered groups G = Balanced_kmeans(SampleW, k); 6 Select the group g with the maximum sum; 7 Append g to RW; 8 Remove g from W; 9 end 10 Return RW; We use a row clustering method to obtain an appropriate permutation order. A heuristic indicator for evaluating the quality of a permutation is the sum of the absolute values of the pruned weights, under the assumption that preserving more important weights corresponds to less inference accuracy loss. An obvious route is to assign weight rows with shorter distances to the same group as much as possible. Shfl_BW chooses the k-means method for clustering, but after careful experiments we found that k-means does not suit this problem well. For starters, the number of elements in each group must be a fixed value (the vector length), and k-means requires additional operations to meet this demand. Also, the data dimension (input channels multiplied by filter height and width) is very large, while the number of data points and groups is relatively small; k-means easily falls into local minima and its output clusters are extremely unstable. We introduce balanced k-means [13] to solve this, and modify it to alleviate both symptoms mentioned above. Algorithm 2 shows the key steps of our algorithm. First, we construct a characteristic matrix by assembling the columns with the highest variance, and then cluster the rows of this matrix to alleviate the excessive dimensionality. We utilize balanced k-means to obtain equal-size clusters. In each iteration of balanced k-means, instead of assigning each data vector to its nearest cluster center as in the original k-means algorithm, a distance matrix between all the vectors and the current cluster centers is formed. We minimize the sum of distances under the constraint that each cluster contains the same number of data vectors. This minimization problem can be converted to bipartite matching, and we employ the Kuhn-Munkres algorithm to solve it (see the sketch below). Second, for each clustering result, we only adopt the most important group under the current features, to increase the stability of the procedure. This group is removed from the original matrix, and the feature matrix is then reconstructed and clustered again. These steps are repeated until all rows have been grouped, at which point the permuted matrix and its permutation order are obtained. 5 Evaluation 5.1 Model Accuracy We evaluate our method on several popular CNN models on an NVIDIA V100 GPU. We only calculate the speedup of the convolution layers in the following results. Table 1 shows the accuracy of our method compared to unstructured sparsity, where V stands for the vector length. "OVW permuted" shows better accuracy than "OVW non-permuted" on all CNN models.
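Returning to the balanced assignment step of Section 4.2: one matching step can be phrased as a bipartite matching, as sketched below (illustrative Python; balanced_assign, the toy shapes, and the Euclidean distance are assumptions, and the real algorithm alternates this step with center updates):

import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_assign(X, centers):
    # Assign each row of X to one of k centers so that every cluster receives
    # exactly n/k rows while the total distance is minimized (Kuhn-Munkres).
    n, k = X.shape[0], centers.shape[0]
    size = n // k                                            # rows per cluster
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)   # (n, k) distances
    # Replicate each center 'size' times so the cost matrix is square, then match.
    _, cols = linear_sum_assignment(np.repeat(d, size, axis=1))
    return cols // size                                      # cluster id per row

X = np.random.randn(12, 6)
centers = X[np.random.choice(12, 3, replace=False)]
labels = balanced_assign(X, centers)                         # each cluster gets 4 rows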
The upper bound of V in our kernel implementation is 64, and we tend to make it as large as possible to maximize shared memory usage. However, group convolution has no filter data reuse for vector lengths larger than its group size. Similarly, V is set to 1 in depthwise convolution layers. Besides that, the first convolution layer of SqueezeNet has only 96 output channels: if the vector length were 64, the second tile in the output channel dimension would process only 32 rows and thus be only half loaded. In this case, the vector length is set to 32 to maximize the utilization of computation resources. The V selection strategy is therefore to maximize V while utilizing shared memory resources as fully as possible. All the results in this table use the same fine-tuning process: we fine-tune each network for 40 epochs after pruning with the same learning rate of 0.0008. Also, each layer can hold a different sparsity ratio, thanks to our acceleration of convolution at low sparsity ratios. 5.2 Convolution Kernel Speedup As shown in Fig 4, we evaluate the speedup of our method on three popular CNN models. We use the cuDNN convolution operator as the dense baseline. The first three graphs in Fig 4 represent three typical convolution shapes in CNN models: small channel size with a large feature map, medium channel size with a medium feature map, and large channel size with a small feature map. Our kernel accelerates all these types of convolution layers, and on the twelfth convolution layer of ResNet50, the most common kind of convolution layer, it achieves 4.8× and 3.87× speedups over cuDNN on V100 at 80% sparsity with vector lengths 64 and 32, respectively. 5.3 Comparing Different Sparsity Patterns We replicate two vector-level sparsity patterns, balanced sparsity (NVIDIA 2:4) [15] and Shfl_BW [9], for comparison. Other vector-level sparsity patterns such as Tile-wise [5] are slower than the former two, and they also lack implementations for convolution. Table 2 shows results for ResNet50 on ImageNet copied directly from the Shfl_BW paper, where an expensive method, Grow and Prune [12], is used to recover its accuracy; Grow and Prune is a sparsity-pattern-independent method. We fine-tune our pretrained ResNet50 model for 20 epochs with a learning rate of 0.001. We lower the sparsity of our network to 70%, where our method demonstrates 73.35% top-1 accuracy and a 2.79× speedup. Our method exhibits an obviously better speed-accuracy tradeoff than Shfl_BW. The OVW pattern also achieves a better speedup than balanced sparsity while recovering the full accuracy of the original model. To ensure a fair comparison, we reproduce these results under the same setting with ResNet50 on CIFAR-100, as shown in Fig 5. We fine-tune each network for 40 epochs after pruning from pretrained dense models with the same learning rate of 0.0008. The OVW pattern dominates the speed-accuracy trade-off in vector-level sparsity. 6 Conclusion Accelerating sparse convolution poses a greater challenge than accelerating sparse matrix multiplication. In this work, we propose a novel sparsity pattern, the OVW pattern, to facilitate sparse convolution acceleration with intact accuracy. Limitations do exist: our method relies heavily on hardware support for the implicit GEMM convolution algorithm; the performance of our base dense kernel relative to proprietary, unpublished kernels is unstable; and our method does not achieve the same acceleration rate on plain matrix multiplication, with the acceleration rate being subject to the filter shape.
Its performance also degrades in specialized convolution layers where the opportunity for data reuse is limited. Even so, our GPU implementation still largely outperforms all sparse acceleration approaches that use sparsity patterns of similar flexibility. We hope this work can fill the vacancy in specialized sparse convolution kernel design, and that our methodology can inspire further research in this domain. Acknowledgement We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research. This work is partially supported by the National Key R&D Program of China (under Grant 2017YFA0700902), the NSF of China (under Grants 61925208, 61732020, U19B2019), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32050200), the Beijing Academy of Artificial Intelligence (BAAI), the CAS Project for Young Scientists in Basic Research (YSBR-029), the Youth Innovation Promotion Association CAS, and the Xplore Prize.
1. What is the focus and contribution of the paper regarding sparse convolutional neural networks? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and technical soundness? 3. What are the weaknesses of the paper, especially regarding the experiments and writing quality? 4. Do you have any concerns or suggestions regarding the vector-based sparsity pattern (OVW)? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper aims to speed up sparse convolutional neural networks on GPU devices. Instead of an unstructured sparsity pattern, this paper proposes a vector-based sparsity pattern (OVW) to balance accuracy loss and computation efficiency. Based on the OVW pattern, a heuristic clustering method is introduced to obtain an appropriate channel permutation with smaller accuracy loss. Moreover, an efficient CUDA kernel is implemented for OVW-based sparse convolution. Extensive experiments on several network architectures demonstrate the efficiency of the method. Strengths And Weaknesses Strengths: This work is clearly motivated. Unstructured sparsity is hard to accelerate on GPUs, while channel-wise sparsity tends to have a large accuracy drop. A sparsity pattern between these two would offer a better trade-off. The proposed vector-based sparsity pattern (OVW) is reasonable and technically sound. It can preserve continuous memory access for efficient GPU implementation. Extensive evaluation and impressive results. This paper demonstrates the power of the proposed sparse convolution on different network architectures, outperforming representative unstructured sparse methods. Weaknesses: The proposed method achieves practical speedup on a V100 GPU. It would be better to verify it on more GPU types. The writing should be improved. There are some typos: Common issue: there should be a space before a left bracket. Line 34: Missing citation number. Line 270: Fig5 -> Fig 5. Questions If I understand correctly, the hyperparameter V should be an integer. In Table 1, why is V 44.48 (32.02) in the second-to-last (last) row? The proposed OVW sparsity pattern is faster than unstructured sparsity. It would be better to include a comparison with unstructured sparsity in Figure 5. Limitations Yes.
NIPS
Title Accelerating Sparse Convolution with Column Vector-Wise Sparsity Abstract Weight sparsity is a promising approach to reducing the model size and computation cost of convolutional neural networks (CNNs). Nevertheless, non-zero weights are often distributed randomly in sparse CNN models, introducing enormous difficulty in obtaining actual speedup on common hardware (e.g., GPU) over their dense counterparts. Existing acceleration solutions either require hardware modifications for irregular memory access support or rely on a partially structured sparsity pattern. Neither of these methods is capable of achieving fruitful speedup on convolution layers. In this work, we propose an algorithm-software co-designed sparse convolution based on a novel out-vector-wise (OVW) sparse pattern. Building on the insight that vertical vector integrity can preserve continuous memory access in IM2COL, the OVW pattern treats a V × 1 vector as a unit. To reduce the error caused by sparsity, we propose an equivalent transformation process, i.e., clustering-based channel permutation, to gather similar rows together. Experimental evaluations demonstrate that our method achieves a 1.7× and 3.2× speedup over the SOTA solution and the dense convolution of ResNet50 on NVIDIA V100 at 75% sparsity, respectively, with only negligible accuracy loss. Moreover, compared to the SOTA solution, which achieves speedups only on data with 60% sparsity or more, our method begins to obtain speedups on data with only 10% sparsity. 1 Introduction Recently, convolutional neural networks (CNNs) have yielded astonishing results in many important domains such as vision [8] and language [18]. As CNN algorithms develop rapidly, CNN models' storage and computation overheads grow dramatically. To significantly reduce both the computations and memory accesses, weight sparsity has been adopted as a promising approach to improve hardware efficiency. Despite its success in reducing computations and data accesses, unconstrained, fine-grained sparsity fails to bring practical speedups on common GPUs. This is because unstructured sparsity generally induces tremendous access conflicts and load imbalances, which lower GPU performance. For example, on NVIDIA V100, sparse matrix multiplication performs no faster than dense matrix multiplication until the sparsity ratio is over 95% [17, 3]. Unfortunately, existing solutions either require hardware modifications or only partially address the problem by being constrained to structured, coarse-grained sparsity, resulting in high accuracy loss. The former approach is to leverage the sparse matrix-matrix multiplication (e.g., SPMM) operation on GPU. While directly applying SPMM to sparse CNNs can run even slower than dense CNNs [17], productive SPMM acceleration solutions [21, 14] often require dedicated hardware support to overcome discontinuous memory access, which is impractical. The latter approach is to leverage the general matrix-matrix multiplication (e.g., GEMM) operation on GPU. Recent works focus on structured sparsity with different sparse patterns to gain speedup benefits from weight sparsity.
Block sparsity [4] manages to restore the spatial locality of matrices to a large extent, at the cost of a strict restriction on the distribution of non-zero weights. Balanced sparsity [2, 19, 15], newly supported on the NVIDIA A100 GPU [14], however, lacks flexibility in choosing the model sparsity rate: only an exact 50% sparsity ratio can be deployed on this dedicated hardware. These efforts achieve some palpable acceleration compared to the dense GEMM operation, but they all struggle to attain similar results on convolution layers, which have proven to be a greater challenge. To tackle these problems, here we present a novel sparse convolution acceleration algorithm featuring column-wise sparsity and implicit matrix multiplication. Specifically, the proposed column-wise sparsity is dubbed the out-vector-wise (OVW) sparse pattern, since the pattern sparsifies a matrix by treating a V×1 vector as an entirety, as shown in Figure 1. During convolution, the OVW pattern can maintain both strong memory consistency and high data reuse rates for the input matrices using implicit matrix multiplication. Moreover, we propose to employ channel permutation and row clustering to improve the accuracy of OVW sparse pattern-based CNNs. Besides, a GPU kernel is carefully designed to ensure that our OVW sparse pattern is supported on common GPUs. With these efforts, our algorithm predominantly outperforms other sparse convolution acceleration algorithms on various CNN models. More importantly, our algorithm can achieve acceleration of convolutions even with a very low weight sparsity ratio, e.g., 10%, whereas prior art only works well when the weight sparsity ratio is over 60%. The main contributions of this paper are listed as follows: • We propose a vector-based sparsity pattern, i.e., the OVW pattern, to balance inference accuracy loss and computation efficiency in a hardware-friendly manner. • We implement a new GPU convolution kernel to support the OVW pattern. The kernel utilizes the technique of extracting filter location information, which can further reduce inference runtime. • We propose a heuristic clustering method to obtain an appropriate channel permutation for reducing the accuracy drop during weight pruning. This channel permutation is conducted offline and does not affect inference time. • Our GPU kernel can accelerate convolution over a wide range of model sparsity rates. With negligible accuracy loss, the kernel can speed up ResNet50 by 1.7× and 3.2× over the SOTA solution and the dense cuDNN convolution, respectively, on an NVIDIA V100 GPU at a 75% sparsity level. 2 Related work 2.1 Software-only Acceleration For Sparse CNN Model Weight pruning has been a popular technique for efficient CNN inference. Early studies [7, 6] show that removing a large proportion of unimportant connections in CNN models does not necessarily lead to impaired inference accuracy. Reducing parameters helps exploit redundancy in CNN models, which requires fewer computations and data accesses. However, for CNN inference, weight-pruned sparse CNNs usually perform worse than their dense counterparts unless the sparsity ratios are substantial, i.e., for very sparse CNNs. To address this issue, methods other than unstructured sparsity have been exploited. Researchers explore various constraints on sparsity patterns in exchange for computation efficiency. A primary direction in this area is filter pruning, where the parameters of an entire filter are pruned or kept as a whole. However, this direct modification of the channel size suffers a sharp accuracy drop [10, 11, 16].
Moderate sparsity patterns are also examined, such as block sparsity [17], which is proposed to improve the spatial locality of sparse matrices; however, it achieves speedup only when the sparsity ratio is larger than 70%. Tile-wise sparsity [5] endows weight patterns with more flexibility. Compared to previous methods, balanced sparsity [15, 19] is more practical thanks to recent support from the NVIDIA A100 GPU [14], which directly optimizes 2:4 balanced sparsity. Recent work Shfl_BW [9] uses a matrix transformation to exploit block sparsity's computation efficiency while removing some of its constraints. In this way, the threshold weight sparsity ratio that enables acceleration is reduced from 70% to 60%. Different from prior works, which only work for very sparse matrices, our algorithm can achieve speedup when the sparsity ratio is only 10%. 2.2 GEMM Based Convolution GEMM has been widely adopted to perform convolution, and it performs significantly better than other convolution methods such as FFT and Winograd on modern commercial hardware accelerators such as TPUs and GPUs. GEMM-based algorithms can be further divided into two types: explicit matrix multiplication and implicit matrix multiplication. Explicit matrix multiplication uses IM2COL to adapt inputs for GEMM. IM2COL is an IO-intensive operation, which brings significant workload beyond the computation cost [1]. Implicit matrix multiplication merges these operations for more efficient memory accesses: it updates feature-map pointers in shared memory and performs tile-based matrix multiplication simultaneously. On NVIDIA V100 GPUs, explicit GEMM convolution consumes on average 120%, 126%, and 142% of the time of implicit GEMM-based convolution on the convolution layers of AlexNet, ResNet, and GoogLeNet [20]. Yet few studies have investigated sparse convolution with implicit GEMM. Sparse convolution via GEMM is always performed through explicit rather than implicit matrix multiplication, because the IM2COL operation is extremely difficult, if not impossible, for sparse matrices, which are compactly compressed and irregularly stored. As a result, implicit GEMM, which does not suffer from the costly IM2COL operation, has the potential to achieve higher efficiency for sparse convolution. In this paper, we investigate implicit GEMM-based sparse convolution to leverage the high-performance GEMMs on existing hardware. 3 Accelerating Sparse Convolution In this section, we introduce our proposed sparse convolution algorithm, including the OVW pattern of sparsity for the proposed sparse convolution, its advantage in convolution computation, and our detailed implementation on GPU. 3.1 The OVW Pattern The OVW pattern belongs to the vector-wise (VW) pattern, which is one of three categories of sparsity patterns in a matrix. As shown in Fig 1, the first sparsity pattern is the element-wise (EW) pattern, corresponding to unstructured pruning, which evaluates each parameter individually. Imposing no constraint on pruning, this pattern excels in model flexibility but struggles to achieve actual acceleration due to its irregular memory accesses. The second sparsity pattern is the VW pattern, which can be further divided into the inter-vector-wise (IVW) pattern and the OVW pattern.
They both treat a V×1 vector as an entirety: the IVW pattern prunes a certain proportion of weights inside each vector, while the OVW pattern prunes or keeps the entire vector of weights. The third pattern is the block-wise (BW) pattern, whose minimum pruning granularity is a V×V block. This pattern has the highest computation efficiency, but its inference accuracy loss is high as well. In this work, we use the OVW pattern since it shares the advantages of the VW pattern, balancing the computation efficiency of BW against the network accuracy of EW. The Shfl_BW pattern is actually a variant of this pattern which uses an extra channel reordering procedure to gain block-wise pattern utilities. 3.2 The OVW Pattern's Advantage in Convolution Computation The biggest advantage of the OVW pattern is that it fits the way an efficient dense warp-level GEMM instruction fetches input data. This instruction is the key contributor to most sparse matrix acceleration methods. The reasons are as follows. As shown in Fig 2, the OVW pattern-based sparse convolution can be broken down into multiple dense matrix multiplications of smaller sizes. During the loading process of a dense convolution procedure, a column of filter data loaded into shared memory shares a specific position in the filter map. In the meantime, a contiguous block in the feature map is loaded accordingly to prepare the convolution. Several columns fetched from the filters together form the left input matrix of the block matrix multiplication, and their corresponding feature data blocks form the right input matrix. Note that this process of forming the input matrices does not require the loaded filter columns to be contiguous, meaning that efficient dense operations can also be performed by grouping unrelated columns. Based on this observation, we can take in multiple columns with unrelated column indices from the OVW-pattern sparse matrix and handle them in the same way as a dense GEMM operation. This similarity between our convolution algorithm and implicit GEMM convolution guarantees similar overall computation efficiency. Moreover, other sparsity patterns, such as Shfl_BW, require an extra channel reordering procedure to gain block-wise pattern utilities, whereas the OVW pattern can be used directly in our convolution algorithm, which indicates a higher acceleration potential. Compared to N:M sparsity, an instance of the IVW pattern, our approach does not need specialized hardware support and is much more flexible in selecting the sparsity ratio of each layer. Besides, the IVW pattern still faces the memory-bound issue, because the proportion of redundant data that needs to be loaded into shared memory each time equals its sparsity ratio. 3.3 GPU Sparse Kernel Implementation As shown in Algorithm 1, our convolution kernel implementation contains three steps. The first step is to obtain the corresponding feature pointer offsets by recovering the original filter structure information; this part of the calculation is done by the function Cal_Thread_Offset (a small sketch of this index recovery follows below). The second step is to load data from both input matrices into shared memory: some threads use the function Load_column to load a column vector of length TM from the filters into shared memory with DY threads, and threads then use the function Load_row to load a row vector from the feature map in the same way. The third step calls the warp matrix multiplication operators.
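To make the two-step index recovery of Cal_Thread_Offset concrete, here is a minimal Python sketch; the CHW feature layout, unit stride, zero padding, and all sizes are assumptions for illustration, not the kernel's actual layout:

# Recover positions for a stored sparse column index (sketch, assumed layouts).
C, KH, KW = 256, 3, 3        # input channels, kernel height/width
H, W = 28, 28                # feature-map height/width (assumption)

def feature_offset(col_idx, out_y, out_x):
    # Step 1: column index -> (channel, kh, kw) position in the filter map.
    c, rem = divmod(col_idx, KH * KW)
    kh, kw = divmod(rem, KW)
    # Step 2: filter position + output position -> offset into the feature map
    # (CHW layout, stride 1, no padding -- all assumptions for illustration).
    return (c * H + (out_y + kh)) * W + (out_x + kw)

off = feature_offset(col_idx=1234, out_y=5, out_x=7)

The filter-map part of this computation depends only on the stored column index, which is why it is constant per thread and can be precomputed, as the text explains.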
When calculating the matrix product A_{M×K} × B_{K×N} = C_{M×N}, a three-dimensional parallelism of (DX, DY, DZ) threads is employed: the threads loop along the (M, N, K) dimensions respectively. Several threads together use the function Warp_MMA to multiply the loaded matrices and write back after the accumulation of results is finished. Each thread computes a tile matrix multiplication of size (TM, TN, TK). The procedure of calculating the exact pointer offset into the feature map for the corresponding filter columns contains two steps. During a convolution computation, the location corresponding to a_{ij} × b_{jk} is not obvious. Hence, after loading a_{ij} into shared memory, the GPU kernel first has to fetch the column index of a_{ij} in the original filter. The column index is then used to recover the exact position of this column in the filter map. Subsequently, the location offset of the corresponding data in the feature map is calculated, after which b_{jk} can finally be located in the feature map. Algorithm 1: Sparse convolution computation Data: row_idx[], filter[], input[] Result: output[] 1 Shared memory A[TM][TK], B[TK][TN], C[TM][TN]; 2 for Thread idx=1 to DX, idy=1 to DY, idz=1 to DZ do 3 offset = Cal_Thread_Offset(row_idx[], idx, idy, idz); 4 if idx < TK then 5 Load_column(A, filter[idz][idx], TM, idy); 6 end 7 if idy < TK then 8 Load_row(B, input[offset], TN, idx); 9 end 10 Syncthreads(); 11 Warp_MMA(A, B, idx, idy); 12 Accumulate_Results(C); 13 Store(output, C); 14 end 15 Return output; If only the column indices of the sparse matrices are stored, their location information has to be recovered each time before the corresponding activation is loaded. Like the location offset in the feature map, the corresponding data location in the filter map can be prepared in advance, because it is a constant for each thread during the whole process. The extra storage occupied by this technique consists of two additional dimension-index arrays for the filter map, which take merely 3% of the total storage of a compressed model with vector length 64, and 6% with vector length 32, in exchange for a 10% runtime reduction of ResNet50's convolution layers on average. Considering that a sparse model is already highly compressed, this additional model redundancy is entirely acceptable. 4 Pruning Algorithm In this section, we introduce our pruning algorithm for the OVW pattern, including the channel permutation technique and our method for acquiring a desired permutation order. 4.1 Channel Permutation Our pruning method can be divided into two steps: shuffling the filter matrix rows in each layer, and then applying vector-wise pruning (the pruning step itself is sketched below). Here we explain why filter permutation does no harm to network inference. Permuting rows of a matrix multiplication only swaps the order of the output dimension and does not change the actual computation; the permuted results can be recovered through a reversed permutation of the output. As we only permute the output channels of each layer, the permuted order of the current layer is absorbed by the input channel dimension of the next GEMM-based layer (convolution or linear). Fig 3 shows one iteration of channel permutation between layer k and layer k+1. After we reorder the output channels of layer k, activation k is changed to the same order, but once we permute the input channels of layer k+1 accordingly, activation k+1 is restored. More important weight values can be preserved after permutation. The same operation is then repeated on layer k+1, and so on, until every GEMM layer in the network is permuted.
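The vector-wise pruning step itself (the second step of the method above) can be sketched as follows; prune, the scoring by L1 norm, and the toy shapes are illustrative assumptions, with V=1 recovering element-wise pruning for comparison:

import numpy as np

def prune(W, sparsity, V=1):
    # Keep the largest-scoring units. V=1 gives element-wise (EW) pruning;
    # V>1 scores whole V x 1 column vectors, i.e. the OVW pattern.
    M, K = W.shape                                         # M assumed divisible by V
    score = np.abs(W.reshape(M // V, V, K)).sum(axis=1)    # one score per unit
    cut = np.quantile(score, sparsity)
    keep = (score >= cut)[:, None, :]                      # broadcast over the vector
    return W * np.repeat(keep, V, axis=1).reshape(M, K)

W = np.random.randn(64, 27)                 # e.g. 3x3x3 filters, 64 output channels
W_ew  = prune(W, 0.75, V=1)                 # unstructured baseline
W_ovw = prune(W, 0.75, V=16)                # vector-wise, GPU-friendly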
This permutation-transfer procedure allows us to choose an appropriate row permutation for every filter without altering the output of the network. The remaining layers, such as pooling layers and activation layers, involve no modification along the channel dimension and are thus not affected by this process. The BN layers, and the biases added at the end of convolution and linear layers, do not produce any new permutation order, but they have to be permuted according to the permutation passing through. 4.2 Row Clustering Algorithm 2: Row clustering Data: The original weight W, number of clusters k, number of selected columns m. Result: The reordered weight RW. 1 RW = empty; 2 while W is not empty do 3 Sort the columns of W by column variance; 4 Build SampleW by selecting the columns with the top-m largest variances; 5 Get the k clustered groups G = Balanced_kmeans(SampleW, k); 6 Select the group g with the maximum sum; 7 Append g to RW; 8 Remove g from W; 9 end 10 Return RW; We use a row clustering method to obtain an appropriate permutation order. A heuristic indicator for evaluating the quality of a permutation is the sum of the absolute values of the pruned weights, under the assumption that preserving more important weights corresponds to less inference accuracy loss. An obvious route is to assign weight rows with shorter distances to the same group as much as possible. Shfl_BW chooses the k-means method for clustering, but after careful experiments we found that k-means does not suit this problem well. For starters, the number of elements in each group must be a fixed value (the vector length), and k-means requires additional operations to meet this demand. Also, the data dimension (input channels multiplied by filter height and width) is very large, while the number of data points and groups is relatively small; k-means easily falls into local minima and its output clusters are extremely unstable. We introduce balanced k-means [13] to solve this, and modify it to alleviate both symptoms mentioned above. Algorithm 2 shows the key steps of our algorithm. First, we construct a characteristic matrix by assembling the columns with the highest variance, and then cluster the rows of this matrix to alleviate the excessive dimensionality. We utilize balanced k-means to obtain equal-size clusters. In each iteration of balanced k-means, instead of assigning each data vector to its nearest cluster center as in the original k-means algorithm, a distance matrix between all the vectors and the current cluster centers is formed. We minimize the sum of distances under the constraint that each cluster contains the same number of data vectors. This minimization problem can be converted to bipartite matching, and we employ the Kuhn-Munkres algorithm to solve it. Second, for each clustering result, we only adopt the most important group under the current features, to increase the stability of the procedure. This group is removed from the original matrix, and the feature matrix is then reconstructed and clustered again. These steps are repeated until all rows have been grouped, at which point the permuted matrix and its permutation order are obtained (the greedy outer loop is sketched below). 5 Evaluation 5.1 Model Accuracy We evaluate our method on several popular CNN models on an NVIDIA V100 GPU. We only calculate the speedup of the convolution layers in the following results. Table 1 shows the accuracy of our method compared to unstructured sparsity, where V stands for the vector length. "OVW permuted" shows better accuracy than "OVW non-permuted" on all CNN models.
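Returning to the row-clustering procedure of Section 4.2, a compact sketch of its greedy outer loop is given below (illustrative Python; cluster_rows is a hypothetical name, a single matching step stands in for full balanced k-means, and the row count is assumed divisible by V):

import numpy as np
from scipy.optimize import linear_sum_assignment

def cluster_rows(W, V, m=8, seed=0):
    # Greedy outer loop: build a feature matrix from the top-m variance
    # columns, form balanced clusters of size V with one matching step,
    # keep the cluster with the largest weight mass, repeat on the rest.
    rng = np.random.default_rng(seed)
    order, rows = [], np.arange(W.shape[0])
    while rows.size:
        k = rows.size // V
        cols = np.argsort(W[rows].var(axis=0))[-m:]          # top-m variance columns
        S = W[rows][:, cols]                                 # characteristic matrix
        centers = S[rng.choice(rows.size, k, replace=False)]
        d = np.linalg.norm(S[:, None] - centers[None], axis=2)
        _, assign = linear_sum_assignment(np.repeat(d, V, axis=1))
        labels = assign // V                                 # balanced clusters of size V
        sums = np.array([np.abs(W[rows[labels == g]]).sum() for g in range(k)])
        best = int(np.argmax(sums))
        order.extend(rows[labels == best].tolist())          # adopt the heaviest cluster
        rows = rows[labels != best]
    return np.array(order)                                   # permutation of row indices

perm = cluster_rows(np.random.randn(32, 27), V=8)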
The upper bound of V in our kernel implementation is 64, and we tend to make it as large as possible to maximize shared memory usage. However, group convolution has no filter data reuse for vector lengths larger than its group size. Similarly, V is set to 1 in depthwise convolution layers. Besides that, the first convolution layer of SqueezeNet has only 96 output channels: if the vector length were 64, the second tile in the output channel dimension would process only 32 rows and thus be only half loaded. In this case, the vector length is set to 32 to maximize the utilization of computation resources. The V selection strategy is therefore to maximize V while utilizing shared memory resources as fully as possible. All the results in this table use the same fine-tuning process: we fine-tune each network for 40 epochs after pruning with the same learning rate of 0.0008. Also, each layer can hold a different sparsity ratio, thanks to our acceleration of convolution at low sparsity ratios. 5.2 Convolution Kernel Speedup As shown in Fig 4, we evaluate the speedup of our method on three popular CNN models. We use the cuDNN convolution operator as the dense baseline. The first three graphs in Fig 4 represent three typical convolution shapes in CNN models: small channel size with a large feature map, medium channel size with a medium feature map, and large channel size with a small feature map. Our kernel accelerates all these types of convolution layers, and on the twelfth convolution layer of ResNet50, the most common kind of convolution layer, it achieves 4.8× and 3.87× speedups over cuDNN on V100 at 80% sparsity with vector lengths 64 and 32, respectively. 5.3 Comparing Different Sparsity Patterns We replicate two vector-level sparsity patterns, balanced sparsity (NVIDIA 2:4) [15] and Shfl_BW [9], for comparison. Other vector-level sparsity patterns such as Tile-wise [5] are slower than the former two, and they also lack implementations for convolution. Table 2 shows results for ResNet50 on ImageNet copied directly from the Shfl_BW paper, where an expensive method, Grow and Prune [12], is used to recover its accuracy; Grow and Prune is a sparsity-pattern-independent method. We fine-tune our pretrained ResNet50 model for 20 epochs with a learning rate of 0.001. We lower the sparsity of our network to 70%, where our method demonstrates 73.35% top-1 accuracy and a 2.79× speedup. Our method exhibits an obviously better speed-accuracy tradeoff than Shfl_BW. The OVW pattern also achieves a better speedup than balanced sparsity while recovering the full accuracy of the original model. To ensure a fair comparison, we reproduce these results under the same setting with ResNet50 on CIFAR-100, as shown in Fig 5. We fine-tune each network for 40 epochs after pruning from pretrained dense models with the same learning rate of 0.0008. The OVW pattern dominates the speed-accuracy trade-off in vector-level sparsity. 6 Conclusion Accelerating sparse convolution poses a greater challenge than accelerating sparse matrix multiplication. In this work, we propose a novel sparsity pattern, the OVW pattern, to facilitate sparse convolution acceleration with intact accuracy. Limitations do exist: our method relies heavily on hardware support for the implicit GEMM convolution algorithm; the performance of our base dense kernel relative to proprietary, unpublished kernels is unstable; and our method does not achieve the same acceleration rate on plain matrix multiplication, with the acceleration rate being subject to the filter shape.
Its performance also degrades in specialized convolution layers where the opportunity for data reuse is limited. Even so, our GPU implementation still largely outperforms all sparse acceleration approaches that use sparsity patterns of similar flexibility. We hope this work can fill the vacancy in specialized sparse convolution kernel design, and that our methodology can inspire further research in this domain. Acknowledgement We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research. This work is partially supported by the National Key R&D Program of China (under Grant 2017YFA0700902), the NSF of China (under Grants 61925208, 61732020, U19B2019), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32050200), the Beijing Academy of Artificial Intelligence (BAAI), the CAS Project for Young Scientists in Basic Research (YSBR-029), the Youth Innovation Promotion Association CAS, and the Xplore Prize.
1. What is the focus and contribution of the paper on accelerating neural networks? 2. What are the strengths of the proposed approach, particularly in preserving continuous memory access? 3. What are the weaknesses of the paper, especially regarding its claims and experiments? 4. Do you have any concerns about the hyperparameter V and its impact on inference speed? 5. Are there any limitations to the proposed method, and how do they compare to other approaches such as channel pruning?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes an out-vector-wise (OVW) pattern-based sparse convolution for accelerating neural networks. The proposed OVW pattern can preserve continuous memory access. Moreover, a clustering-based channel permutation method is introduced to reduce the error caused by sparsity. The sparse convolution is efficiently implemented using CUDA for GPU acceleration. The experimental results demonstrate the efficiency of the proposed method over unstructured sparsity. Strengths And Weaknesses Strengths: The proposed out-vector-wise (OVW) sparse pattern can preserve continuous memory access, which is beneficial for GPU acceleration, while unstructured sparsity involves random memory access that is unfriendly to hardware. Sparse model acceleration is an important problem for mobile devices. The proposed method achieves a better tradeoff between inference speed and accuracy than unstructured sparsity and channel sparsity. The clustering-based channel permutation method can reduce the error caused by sparsity. Weaknesses: The paper claims that the OVW pattern can achieve a better tradeoff than channel pruning; please add an experimental comparison to evaluate this claim. The hyperparameter V determines the length of a vector. How does it affect the inference speed? It would be better to present an ablation study on it. Questions Please refer to the weaknesses. Limitations Yes
NIPS
Title Synthetic Data Generators -- Sequential and Private Abstract We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately properly PAC learnable admits a private synthetic data generator (perhaps a non-efficient one). Previous work on synthetic data generators focused on the case where the query class D is finite and obtained sample complexity bounds that scale logarithmically with the size |D|. Here we construct a private synthetic data generator whose sample complexity is independent of the domain size, and we replace finiteness with the assumption that D is privately PAC learnable (a formally weaker task, hence we obtain equivalence between the two tasks). 1 Introduction Generating differentially private synthetic data [9, 14] is a fundamental task in learning that has won considerable attention in the last few years [22, 34, 23, 16]. Formally, given a class D of distinguishing functions, a fooling algorithm receives as input IID samples from an unknown real-life distribution, p_real, and outputs a distribution p_syn that is ε-close to p_real w.r.t. the Integral Probability Metric [29], denoted IPM_D: IPM_D(p, q) = sup_{d∈D} |E_{x∼p}[d(x)] − E_{x∼q}[d(x)]| (1) (an empirical version of this quantity is sketched at the end of this introduction). A DP-SDG is then simply defined to be a differentially private fooling algorithm. A fundamental question is then: which classes D can be privately fooled? In this paper, we focus on sample complexity bounds and give a first such characterization. We prove that a class D is DP–foolable if and only if it is privately (and properly) PAC learnable. As a corollary, we obtain an equivalence between several important tasks within private learning, such as proper PAC learning [25], Data Release [14], Sanitization [6], and what we will term here Private Uniform Convergence. Much focus has been given to the task of synthetic data generation, and several papers [5, 23, 16, 20, 21] discuss the reduction of private fooling to private PAC learning. In contrast with previous work, we assume an arbitrarily large domain. In detail, existing bounds normally scale logarithmically with the size of the query class D (or, alternatively, depend on the size of the domain). Here we initiate a study of the sample complexity that does not assume that the size of the domain is fixed. Instead, we only assume that the class is privately PAC learnable, and obtain sample complexity bounds that are independent of the cardinality |D|. We note that the existence of a private synthetic data generator entails private proper PAC learning, hence our assumption is a necessary condition for the existence of a DP-SDG. The general approach taken for generating synthetic data (which we also follow here) is to exploit an online setup: a sequential game between a generator that aims to fool a discriminator, and a discriminator that attempts to distinguish between real and fake data. The utility and generality of this technical method, in the context of privacy, has been observed in several previous works [22, 32, 20]. However, in the finite case, specific online algorithms, such as Multiplicative Weights [20] and Follow-the-Perturbed-Leader [37], are considered. These algorithms are then exploited, in a white-box fashion, in a way that allows an easy construction of SDGs.
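As a concrete reading of Eq. (1), the sketch below estimates IPM_D from finite samples (an illustration only; the finite threshold class D and the Gaussian samples are assumptions chosen for the example, whereas the paper allows arbitrary classes and distributions):

import numpy as np

def ipm(xs_p, xs_q, distinguishers):
    # Empirical IPM_D: the largest gap in the mean value of any d in D
    # between a sample from p and a sample from q, as in Eq. (1).
    return max(abs(np.mean([d(x) for x in xs_p]) - np.mean([d(x) for x in xs_q]))
               for d in distinguishers)

# Toy example: threshold distinguishers over the real line.
D = [lambda x, t=t: float(x > t) for t in np.linspace(-2, 2, 41)]
p = np.random.normal(0.0, 1.0, size=2000)
q = np.random.normal(0.5, 1.0, size=2000)
print(ipm(p, q, D))   # roughly max_t |Phi(t) - Phi(t - 0.5)|, about 0.2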
The technical challenge we face in this work is to generalize the above technique in order to allow the use of no-regret algorithms that work over infinite classes. Such algorithms do not necessarily share the attractive traits of MW and FtPL that allow their exploitation for generating synthetic data. To overcome this, we study a general framework of sequential SDGs and show how an arbitrary online algorithm can be turned, via a black-box process, into an SDG, which in turn can be privatized. We discuss these challenges in more detail in the full version [10]. Thus, the technical workhorse behind our proof is a learning primitive which is of interest in its own right. We term it here the Sequential Synthetic Data Generator (Sequential-SDG). Similar frameworks have appeared [20, 37] in the context of private SDGs, but also more broadly in the context of generative learning [19, 27, 18, 17]. We further discuss this deep and important connection between private learning and generative learning in Section 5. In the Sequential-SDG setting, we consider a sequential game between a generator (player G) and a discriminator (player D). At every iteration, player G proposes a distribution and player D outputs a discriminating function from a prespecified binary class D. The game stops when player G proposes a distribution that is close in IPM_D distance to the true target distribution. As we focus on the statistical limits of the model, we ignore optimization and computational complexity aspects and assume that both players are computationally omnipotent. We provide here a characterization of the classes that can be sequentially fooled (i.e., classes D for which we can construct a Sequential-SDG) and show that the sequentially foolable classes are exactly the Littlestone classes [28, 7]. In turn, we harness Sequential-SDGs, combined with a private discriminator, to generate private synthetic data. Because this framework assumes only a private learner, we show, in some sense, that the sequential setting is a canonical method for generating synthetic data. To summarize, this work contains several contributions. First, we provide the first domain-size-independent sample complexity bounds for DP-Fooling, and show an equivalence between private synthetic data generation and private learning. Second, we introduce and characterize a new class of SDGs and demonstrate their utility in the construction of private synthetic data. 2 Preliminaries In this section we recall standard definitions and notions in differential privacy and learning (a more extensive background is given in the full version [10]). Throughout the paper we will study classes D of boolean functions defined on a domain X. However, we will often use a dual point of view where we think of X as the class of functions and of D as the domain. Therefore, in order to avoid confusion, in this section we let W denote the domain and H ⊆ {0, 1}^W denote the function class. 2.1 Differential Privacy and Private Learning Differential Privacy [13, 12] is a statistical formalism which aims at capturing algorithmic privacy. It concerns problems whose inputs contain databases with private records, and it enables the design of algorithms that are formally guaranteed to protect the private information. For more background see the surveys [15, 35]. The formal definition is as follows: let W^m denote the input space.
An input instance Ω ∈ W^m is called a database, and two databases Ω′, Ω′′ ∈ W^m are called neighbours if there exists a single i ≤ m such that Ω′_i ≠ Ω′′_i. Let α, β > 0 be the privacy parameters; a randomized algorithm M : W^m → Σ is called (α, β)-differentially private if for every two neighbouring Ω′, Ω′′ ∈ W^m and for every event E ⊆ Σ: Pr[M(Ω′) ∈ E] ≤ e^α Pr[M(Ω′′) ∈ E] + β. An algorithm M : ∪_{m=1}^∞ W^m → Y is called differentially private if for every m its restriction to W^m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible¹. Concretely, we will think of α(m) as a small constant (say, 0.1) and β(m) = O(m^{−log m}). Private Learning. We next overview the notion of differentially private learning algorithms [25]. In this context the input database is the training set of the algorithm. Given a hypothesis class H over a domain W, we say that H ⊆ {0, 1}^W is privately PAC learnable if it can be learned by a differentially private algorithm. That is, if there is a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0 and every distribution P over W × {0, 1}, if M receives an independent sample S ∼ P^m then it outputs a hypothesis h_S such that with probability at least 1 − δ: L_P(h_S) ≤ min_{h∈H} L_P(h) + ε, where L_P(h) = E_{(w,y)∼P}[1[h(w) ≠ y]]. If M is proper, namely h_S ∈ H for every input sample S, then H is said to be Privately Agnostically and Properly PAC learnable (PAP-PAC-learnable). In some of our proofs it will be convenient to consider private learning algorithms whose privacy parameter α satisfies α ≤ 1 (rather than α = O(1) as in the definition of private algorithms). This can be done without loss of generality thanks to privacy amplification theorems (see, for example, [35] (Definition 8.2) and references therein; see also the full version [10] for further details). Sanitization. The notion of sanitization was introduced by [9] and further studied in [6]. Let H ⊆ {0, 1}^W be a class of functions. An (ε, δ, α, β, m)-sanitizer for H is an (α, β)-private algorithm M that receives as input a sample S ∈ W^m and outputs a function Est : H → [0, 1] such that with probability at least 1 − δ, (∀h ∈ H) : |Est(h) − |{w ∈ S : h(w) = 1}| / |S|| ≤ ε. We say that H is sanitizable if there exist an algorithm M and a bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0, the restriction of M to samples of any size m ≥ m(ε, δ) is an (ε, δ, α, β, m)-sanitizer for H with α = α(m) = O(1) and β = β(m) negligible (a toy construction for finite H is sketched below). Private Uniform Convergence. A basic concept in statistical learning theory is the notion of uniform convergence. In a nutshell, a class of hypotheses H satisfies the uniform convergence property if for any unknown distribution P over examples, one can uniformly estimate the expected losses of all hypotheses in H given a large enough sample from P. Uniform convergence and statistical learning are closely related. For example, the Fundamental Theorem of PAC Learning asserts that they are equivalent for binary classification [33].
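For intuition about the sanitization definition above, here is a minimal sanitizer for a finite class via the Laplace mechanism — a standard construction given for illustration, not the paper's; the finiteness of H and the crude noise calibration by basic composition are assumptions:

import numpy as np

def sanitize(sample, H, alpha):
    # Output private estimates Est(h) of the empirical frequency of every
    # h in a finite class H. Each frequency has sensitivity 1/|S| between
    # neighbouring databases; the noise scale carries a factor |H| for basic
    # composition (an assumption -- far from optimal, but (alpha, 0)-DP).
    m = len(sample)
    return {h: np.mean([h(w) for w in sample])
               + np.random.laplace(scale=len(H) / (alpha * m))
            for h in H}

H = [lambda w, t=t: int(w > t) for t in range(10)]
est = sanitize(np.random.randint(0, 10, size=5000), H, alpha=1.0)

Changing one record moves each empirical frequency by at most 1/m, so this construction's error grows with |H|; the point of the paper's domain-size-independent bounds is precisely to avoid this kind of dependence.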
This notion extends to the setting of private learning: a class H satisfies the Private Uniform Convergence property if there exist a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every distribution P over W × {0, 1} the following holds: if M is given an input sample S of size at least m(ε, δ) which is drawn independently from P, then it outputs an estimator L̂ : H → [0, 1] such that with probability at least 1 − δ it holds that (∀h ∈ H) : |L̂(h) − L_P(h)| ≤ ε. Note that without the privacy restriction, the estimator L̂(h) = L_S(h) := |{(w_i, y_i) ∈ S : h(w_i) ≠ y_i}| / |S| satisfies the requirement for m = Õ(d/ε²), where d is the VC-dimension of H; this follows from the celebrated VC-Theorem [36, 33]. ¹I.e., β(m) = o(m^{−k}) for every k > 0. 3 Problem Setup We assume a domain X and we let D ⊆ {0, 1}^X be a class of functions over X. The class D is referred to as the discriminating function class, and its members d ∈ D are called discriminating functions or distinguishers. We let ∆(X) denote the space of distributions over X. Given two distributions p, q ∈ ∆(X), let IPM_D(p, q) denote the IPM distance between p and q as in Eq. (1). It will be convenient to assume that D is symmetric, i.e., that whenever d ∈ D then also its complement 1 − d ∈ D. Assuming that D is symmetric does not lose generality and helps simplify notation. We will also use the following shorthand: given a distribution p and a distinguisher d we will often write p(d) := E_{x∼p}[d(x)]. Under this assumption and notation we can remove the absolute value from the definition of the IPM: IPM_D(p, q) = sup_{d∈D} (p(d) − q(d)). (2) 3.1 Synthetic Data Generators A synthetic data generator (SDG), without additional constraints, is defined as follows. Definition 1 (SDG). An SDG, or a fooling algorithm, for D with sample complexity m(ε, δ) is an algorithm M that receives as input a sample S of points from X and parameters ε, δ such that the following holds: for every ε, δ > 0 and every target distribution p_real, if S is an independent sample of size at least m(ε, δ) from p_real then Pr[IPM_D(p_syn, p_real) < ε] ≥ 1 − δ, where p_syn := M(S) is the distribution output by M, and the probability is taken over S ∼ (p_real)^m as well as over the randomness of M. We will say that a class is foolable if it can be fooled by an SDG algorithm whose sample complexity is poly(1/ε, 1/δ). Foolability, without further constraints, comes with the following characterization, which is an immediate corollary (or rather a reformulation) of the celebrated VC Theorem [36]. Denote by M_emp an algorithm that receives a sample S and returns M_emp(S) := p_S, the empirical distribution over S. Observation 1 ([36]). The following statements are equivalent for a class D ⊆ {0, 1}^X: 1. D is PAC-learnable. 2. D is foolable. 3. D satisfies the uniform convergence property. 4. D has a finite VC-dimension. 5. M_emp is a fooling algorithm for D with sample complexity m = O(log(1/δ)/ε²). Observation 1 shows that foolability is equivalent to PAC-learnability (and in turn to finite VC dimension). We will later see analogous results for DP–Foolability (which is equivalent to differentially private PAC learnability) and Sequential–Foolability (which is equivalent to online learnability). We now discuss the two fundamental models that are the focus of this work – DP–Foolability and Sequential–Foolability. 3.2 DP–Synthetic Data Generators We next introduce the notion of a DP–synthetic data generator and DP–Foolability.
As discussed, DP-SDGs have been the subject of study of several papers [9, 14, 22, 34, 23, 16]. Definition 2 (DP-SDG). A DP-SDG, or a DP-fooling algorithm M for a class D, is an algorithm that receives as input a finite sample S and two parameters (ε, δ) and satisfies: • Differential Privacy. For every m, the restriction of M to input samples S of size m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible. • Fooling. M fools D: there exists a sample complexity bound m = m(ε, δ) such that for every target distribution p_real, if S is a sample of at least m examples from p_real then IPM_D(p_syn, p_real) ≤ ε with probability at least 1 − δ, where p_syn is the output of M on the input sample S. We will say in short that a class D is DP–Foolable if there exists a DP-SDG for the class D with sample complexity m = poly(1/ε, 1/δ). 3.3 Sequential–Synthetic Data Generators We now describe the second model of foolability which, as discussed, is the technical engine behind our proof of the equivalence between DP-foolability and DP-learning. Sequential-SDGs. A Sequential-SDG can be thought of as a sequential game between two players called the generator (denoted by G) and the discriminator (denoted by D). At the beginning of the game, the discriminator D receives the target distribution, denoted p_real. The goal of the generator G is to find a distribution p such that p and p_real are ε-indistinguishable with respect to some prespecified discriminating class D and an error parameter ε > 0, i.e., IPM_D(p, p_real) ≤ ε. We note that both players know D and ε. The game proceeds in rounds, where in each round t the generator G submits to the discriminator a candidate distribution p_t and the discriminator replies according to the following rule: if IPM_D(p_t, p_real) ≤ ε then the discriminator replies "WIN" and the game terminates. Else, the discriminator picks d_t ∈ D such that |p_real(d_t) − p_t(d_t)| > ε, and sends d_t to the generator along with a bit which indicates whether p_t(d_t) > p_real(d_t) or p_t(d_t) < p_real(d_t). Equivalently, instead of transmitting an extra bit, we assume that the discriminator always sends d_t ∈ D ∪ (1 − D) such that p_real(d_t) − p_t(d_t) > ε. (3) (A toy simulation of this game appears below.) Definition 3 (Sequential–Foolability). Let ε > 0 and let D be a discriminating class. 1. D is called ε-Sequential–Foolable if there exist a generator G and a bound T = T(ε) such that G wins against any discriminator D with any target distribution p_real after at most T rounds. 2. The round complexity of Sequentially–Fooling D is defined as the minimal upper bound T(ε) on the number of rounds that suffice to ε-Fool D. 3. D is called Sequential–Foolable if it is ε-Sequential–Foolable for every ε > 0 with T(ε) = poly(1/ε). In the next section we will see that if D is ε-Sequential–Foolable for some fixed ε < 1/2 then it is Sequential–Foolable with round complexity T(ε) = O(1/ε²). 4 Results Our main result characterizes DP–Foolability in terms of basic notions from differential privacy and PAC learning. Theorem 1 (Characterization of DP–Fooling). The following statements are equivalent for a class D ⊆ {0, 1}^X: 1. D is privately and properly learnable in the agnostic PAC setting. 2. D is DP–Foolable. 3. D is sanitizable. 4. D satisfies the private uniform convergence property. Theorem 1 shows a qualitative equivalence between the four relevant notions; quantitative bounds on the entailed sample complexity are provided in the full version [10]. The implication Item 3 =⇒ Item 1 was known prior to this work and was proven in [6] (albeit for the pure case).
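Returning to the Sequential-SDG game of Section 3.3, the sketch below simulates the generator-discriminator loop over a finite domain with a multiplicative-weights generator, in the spirit of the finite-class constructions discussed in the introduction. Everything here is an assumption chosen for illustration (the finite domain X, the finite class D, and the parameters eta and T); the paper's setting allows infinite classes:

import numpy as np

def sequential_sdg(p_real, D, eps, eta=0.1, T=2000):
    # The generator proposes p_t; the discriminator returns any d in D (or 1-d)
    # with p_real(d) - p_t(d) > eps, else the generator wins, as in Eq. (3).
    n = len(p_real)
    p = np.full(n, 1.0 / n)                       # initial uniform proposal
    for t in range(T):
        gaps = [(d @ p_real - d @ p, d) for d in D]
        gaps += [((1 - d) @ p_real - (1 - d) @ p, 1 - d) for d in D]
        gap, d = max(gaps, key=lambda g: g[0])
        if gap <= eps:
            return p, t                           # discriminator replies "WIN"
        p *= np.exp(eta * d)                      # boost points that the real
        p /= p.sum()                              #   distribution favors
    return p, T

X = 32
D = [np.random.randint(0, 2, X).astype(float) for _ in range(20)]
p_real = np.random.dirichlet(np.ones(X))
p_syn, rounds = sequential_sdg(p_real, D, eps=0.05)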
The equivalence among Items 2 to 4 is natural and expected. Indeed, each of them expresses the existence of a private algorithm that publishes, privately, certain estimates of all functions in D. The fact that Item 1 implies the other three items is perhaps more surprising; it is the main contribution of this work, and we show that Item 1 implies Item 2. Our proof exploits the sequential framework. In a nutshell, we observe that a class that is both sequentially foolable and privately PAC learnable is also DP-foolable: this result follows by instantiating a Sequential-SDG with a private discriminator, which is assumed to exist, combined with standard composition and preprocessing arguments regarding the privacy of the generator's output. Thus, to prove the implication we only need to show that private PAC learning implies sequential foolability. This follows from Corollary 2, which provides a characterization of sequentially foolable classes, together with a recent result of [1] showing that privately PAC learnable classes have finite Littlestone dimension. See the full version [10] for a complete proof. Private learnability versus private uniform convergence. The equivalence Item 1 ⇐⇒ Item 4 is between private learning and private uniform convergence. The non-private analogue of this equivalence is a cornerstone in statistical learning; it reduces the statistical challenge of minimizing an unknown population loss to an optimization problem of minimizing a known empirical estimate. In particular, it yields the celebrated Empirical Risk Minimization (ERM) principle: "Output h ∈ H that minimizes the empirical loss". We therefore highlight this equivalence in the following corollary: Corollary 1 (Private proper learning = private uniform convergence). Let H ⊆ {0, 1}^X. Then H is privately and properly PAC learnable if and only if H satisfies the private uniform convergence property. Sequential–SDGs. We next describe our characterization of Sequential-SDGs. As discussed, this characterization is the technical heart behind the equivalence between private PAC learning and DP-foolability. Nevertheless, we believe it may be of interest in its own right. We thus provide quantitative upper and lower bounds on the round complexity of Sequential-SDGs in terms of the Littlestone dimension (see [7] or the full version [10] for the exact definition). Theorem 2 (Quantitative round-complexity bounds). Let D be a discriminating class with dual Littlestone dimension ℓ* and let T(ε) denote the round complexity of Sequentially–Fooling D. Then, 1. T(ε) = O((ℓ*/ε²) log ℓ*) for every ε. 2. T(ε) ≥ ℓ*/2 for every ε < 1/2. It would be interesting to close the gap between the two bounds in terms of ε > 0, and we leave this for future work. To prove Item 1 we construct a generator with a winning strategy, which we outline in the full version [10]. A complete proof of Theorem 2 appears in the full version [10]. As a corollary we get the following characterization of Sequential–Foolability: Corollary 2 (Characterization of Sequential–Foolability). The following are equivalent for D ⊆ {0, 1}^X: 1. D is Sequential–Foolable. 2. D is ε-Sequential–Foolable for some ε < 1/2. 3. D has a finite dual Littlestone dimension. 4. D has a finite Littlestone dimension. Corollary 2 follows directly from Theorem 2 (which gives the equivalences 1 ⇐⇒ 2 ⇐⇒ 3) and from [8] (which gives the equivalence 3 ⇐⇒ 4; see the full version [10] for further details). Tightness of ε = 1/2.
Tightness of ε = 1/2. The implication Item 2 =⇒ Item 1 can be seen as a boosting result: “weak” foolability for some fixed ε < 1/2 implies “strong” foolability for every ε. The following example demonstrates that the dependence on ε in Item 2 cannot be improved beyond 1/2: let X be the unit circle in R², and let D consist of all arcs whose length is exactly half of the circumference. It is easy to verify that the uniform distribution µ over X satisfies IPM_D(µ, p_real) ≤ 1/2 for any target distribution p_real (since µ(d) = 1/2 for all d ∈ D). Therefore D is (ε = 1/2)-Sequential–Foolable with round complexity T(1/2) = 1. On the other hand, D has an infinite Littlestone dimension and is therefore not Sequential–Foolable.

Sequential-SDGs versus DP-SDGs. So far we have introduced and characterized two formal setups for synthetic data generation. It is therefore natural to compare and seek connections between these two frameworks. We first note that the DP setting can only be more restrictive than the Sequential setting:

Corollary 3 (DP–Foolability implies Sequential–Foolability). Let D be a class that is DP–Foolable. Then D has finite Littlestone dimension and in particular is Sequential–Foolable.

Corollary 3 follows from Theorem 1: indeed, the latter yields that DP–Foolability is equivalent to private agnostic proper PAC learnability (PAP-PAC), and by [1] PAP-PAC learnability implies a finite Littlestone dimension, which by Corollary 2 implies Sequential–Foolability.

Towards a converse of Corollary 3. By the above it follows that the family of classes D that can be fooled by a DP algorithm is contained in the family of all Sequential–Foolable classes; specifically, those which admit a Sequential-SDG with a differentially private discriminator. We do not know whether the converse holds, i.e. whether “Sequential–Foolability =⇒ DP–Foolability”. Nevertheless, the implication “PAP-PAC learnability =⇒ DP–Foolability” (Theorem 1) can be regarded as an intermediate step towards this converse. Indeed, as discussed above, PAP-PAC learnability implies Sequential–Foolability. It is therefore natural to consider the following question, which is equivalent² to the converse of Corollary 3:

Question 1. Let D be a class that has finite Littlestone dimension. Is D properly and privately learnable in the agnostic PAC setting?

A weaker form of this question – whether every Littlestone class is privately PAC learnable – was posed by [1] as an open question (and was recently resolved in [11]).

² I.e., an affirmative answer to Question 1 is equivalent to the converse of Corollary 3.

5 Discussion

In this work we develop a theory for two types of constrained-SDGs, sequential and private. Let us now discuss SDGs more generally: broadly, we want to consider algorithms that observe data, sampled from some real-life distribution, and in turn generate new synthetic examples that resemble real-life samples, without any a-priori constraints. For example, consider an algorithm that receives as input some tunes from a specific music genre (e.g. jazz, rock, pop) and then outputs a new tune. Recently, there has been a remarkable breakthrough in the construction of such SDGs with the introduction of the algorithmic frameworks of Generative Adversarial Networks (GANs) [18, 17], as well as Variational AutoEncoders (VAEs) [26, 31]. In turn, the use of SDGs has seen many potential applications [24, 30, 38]. Here we follow a common interpretation of SDGs as IPM minimizers [2, 4].
However, it was also observed [2, 3] that there is a critical gap between the task of generating new synthetic data (such as new tunes) and the IPM minimization problem. In detail, Observation 1 shows that the IPM framework allows certain “bad” solutions that memorize. Specifically, let S be a sufficiently large independent sample from the target distribution, and consider the empirical distribution as a candidate solution to the IPM minimization problem. Then, with high probability, the IPM distance between the empirical and the target distribution vanishes as |S| grows.

To illustrate the problem, imagine that our goal is to generate new jazz tunes, and consider the discriminating class of all human music experts. The solution suggested above uses the empirical distribution and simply “generates” a tune from the training set³. This clearly misses the goal of generating new and original tunes, but the IPM distance minimization framework does not discard this solution. For this reason we often invoke further restrictions on the SDG and consider constrained-SDGs. For example, [4] suggests restricting the class of possible outputs p_syn and shows that, under certain assumptions on the distribution p_real, the right choice of the class D leads to learning the true underlying distribution (in Wasserstein distance).

³ There are at most 7 · 10^9 music experts in the world. Hence, by standard concentration inequalities, a sample of size roughly (9/ε²) · log 10 suffices to achieve IPM distance at most ε with high probability.

In this work we explored two other types of constrained-SDGs, DP–SDGs and Sequential–SDGs, and we characterized the foolable classes in a distribution-independent model, i.e. without making assumptions on the distribution p_real. One motivation for studying these models, as well as for the interest in a distribution-independent setting, is the following underlying question: the output of synthetic data generators should consist of new examples, but in what sense do we require the output to be novel or distinct from the training set? How, and in what sense, should we avoid copying the training data, or even outputting a memorized version of it?

Answering such questions is of practical importance. For example, consider a company that wishes to automatically generate music or images to be used commercially. One approach could be to train an SDG and then sell the generated output. What can we say about the output of SDGs in this context? Are the images generated by the SDG original? Are they copying the data, or breaching copyright?

In this context, the differentially private setup comes with a very attractive interpretation that provides further motivation to study DP-SDGs, beyond preserving the privacy of the dataset. To illustrate our interpretation of differential privacy as a criterion for originality, consider the following situation: imagine that Lisa is a learning painter. She has learned to paint by observing samples of paintings produced by a mentor painter, Mona. After a learning process, she draws a new painting L. Mona agrees that this new painting is a valid work of art, but claims that the result is not an original painting but a mere copy of a painting, say M, produced by Mona. How can Lisa argue that painting L is not a plagiary? The easiest argument would be that she had never observed M. However, this line of defence is not always realistic, as she must observe some paintings. Instead, we will argue using the following thought experiment: What if Lisa had never observed M? Might she still have created L? If we could prove that this is the case, then one could argue that L is not a plagiary.
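The thought experiment can be made quantitative for a concrete mechanism. The sketch below uses randomized response, which independently flips each private bit with probability 1/(1 + e^α); the database, the neighboring database, and the released output y are all our illustrative choices. The point is only that any particular output that is likely on the real data remains comparably likely on a neighboring input from which one record (the analogue of M) has been replaced.

```python
import math
import numpy as np

# Randomized response: independently keep each private bit w.p.
# e^a / (1 + e^a), otherwise flip it. Per record the keep/flip likelihood
# ratio is e^a, so replacing one record changes the probability of any
# fixed output by a factor of at most e^a.

alpha = 1.0
p_keep = math.exp(alpha) / (1.0 + math.exp(alpha))
rng = np.random.default_rng(0)

def mechanism(db):
    db = np.asarray(db)
    return np.where(rng.random(db.shape) < p_keep, db, 1 - db)

def output_prob(db, y):
    # exact probability that the mechanism maps database db to output y
    match = np.asarray(db) == np.asarray(y)
    return float(np.prod(np.where(match, p_keep, 1.0 - p_keep)))

db1 = [1, 0, 1, 1, 0, 1]
db2 = [0, 0, 1, 1, 0, 1]            # neighbour: record 0 replaced
y = list(mechanism(db1))            # one released output ("painting")

ratio = output_prob(db1, y) / output_prob(db2, y)
print(f"Pr[y|db1]/Pr[y|db2] = {ratio:.3f}, bound e^alpha = {math.exp(alpha):.3f}")
```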
The last argument is captured by the notion of differential privacy. In a nutshell, a randomized algorithm that receives a sequence of data points x̄ as input is differentially private if removing or replacing a single data point in its input does not affect its output y by much; more accurately, any event E over the output y that has non-negligible probability on input x̄ retains non-negligible probability even after modifying one data point in x̄.

The sequential setting also comes with an appealing interpretation in this context. A remarkable property of existing SDGs (e.g. GANs), one that potentially reduces the likelihood of memorization, is that the generator's access to the sample is masked. In more detail, the generator has only restricted access to the training set, via feedback from a discriminator that observes real data vs. synthetic data. Thus, potentially, the generator may avoid degenerate solutions that memorize. Nevertheless, even though the generator is not given direct access to the training data, it could still be that information about this data “leaks” through the feedback it receives from the discriminator. This raises the question of whether Sequential–Foolability can provide guarantees against memorization and, perhaps more importantly, in what sense. To start answering this question, part of this work aims to understand the interconnection between the task of Sequential–Fooling and the task of DP–Fooling.

Finally, the above questions also motivate our interest in a distribution-independent setting, which avoids assumptions on the distribution p_real that we often do not know. In detail, if we only cared about the resemblance between p_real and p_syn, then we might be content with any algorithm that performs well in practice, regardless of whether certain assumptions made in the analysis hold. But if we care to obtain guarantees against copying or memorizing, then these guarantees should hold in principle, and thus we should prefer to obtain them without overly strong assumptions on the distribution p_real.

Acknowledgments and Disclosure of Funding

R.L. is supported by ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M. is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States - Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.

Broader Impact

There are no foreseen ethical or societal consequences for the research presented herein.
1. What is the focus and contribution of the paper on domain-size independent sample complexity bounds for DP-Fooling? 2. What are the strengths of the proposed approach, particularly in terms of its ability to demonstrate an equivalence between private synthetic data generation and private learning? 3. What are the weaknesses of the paper, especially regarding its theoretical contributions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper provides the first domain-size-independent sample complexity bounds for DP-Fooling and shows an equivalence between private synthetic data generation and private learning. In addition, this paper introduces and characterizes a new class of SDGs and demonstrates their utility in the construction of private synthetic data. Strengths As an analogy to the relationship between PAC learning and fooling, this paper shows an equivalence between private synthetic data generation and private PAC learning. Meanwhile, it shows an equivalence between Sequential–Foolability and finite Littlestone dimension. I think these are important theoretical problems in DP and the discoveries are interesting. Weaknesses Although this paper has interesting discoveries, I am not sure about the extent of its theoretical contributions. I think the main theoretical contribution is the proof of the connection between Sequential–Foolability and finite Littlestone dimension, which is somewhat intuitive.
NIPS
Title Synthetic Data Generators -- Sequential and Private

Abstract We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately and properly PAC learnable admits a private synthetic data generator (perhaps a non-efficient one). Previous work on synthetic data generators focused on the case where the query class D is finite and obtained sample complexity bounds that scale logarithmically with the size |D|. Here we construct a private synthetic data generator whose sample complexity is independent of the domain size, and we replace finiteness with the assumption that D is privately PAC learnable (a formally weaker task; hence we obtain an equivalence between the two tasks).

1 Introduction

Generating differentially private synthetic data [9, 14] is a fundamental task in learning that has received considerable attention in the last few years [22, 34, 23, 16]. Formally, given a class D of distinguishing functions, a fooling algorithm receives as input IID samples from an unknown real-life distribution, p_real, and outputs a distribution p_syn that is ε-close to p_real w.r.t. the Integral Probability Metric [29], denoted IPM_D:

IPM_D(p, q) = sup_{d ∈ D} | E_{x∼p}[d(x)] − E_{x∼q}[d(x)] |   (1)

A DP-SDG is then simply defined to be a differentially private fooling algorithm. A fundamental question is then: Which classes D can be privately fooled? In this paper, we focus on sample complexity bounds and give a first such characterization. We prove that a class D is DP–Foolable if and only if it is privately (and properly) PAC learnable. As a corollary, we obtain an equivalence between several important tasks within private learning, such as proper PAC Learning [25], Data Release [14], Sanitization [6], and what we will term here Private Uniform Convergence.

Much focus has been given to the task of synthetic data generation, and several papers [5, 23, 16, 20, 21] discuss the reduction of private fooling to private PAC learning. In contrast with previous work, we assume an arbitrarily large domain. In detail, existing bounds normally scale logarithmically with the size of the query class D (or, alternatively, depend on the size of the domain). Here we initiate a study of the sample complexity that does not assume that the size of the domain is fixed. Instead, we only assume that the class is privately PAC learnable, and obtain sample complexity bounds that are independent of the cardinality |D|. We note that the existence of a private synthetic data generator entails private proper PAC learning, hence our assumption is a necessary condition for the existence of a DP-SDG.

The general approach taken for generating synthetic data (which we also follow here) is to exploit an online setup of a sequential game between a generator that aims to fool a discriminator and a discriminator that attempts to distinguish between real and fake data. The utility and generality of this technical method, in the context of privacy, has been observed in several previous works [22, 32, 20]. However, in the finite case, specific online algorithms, such as Multiplicative Weights [20] and Follow-the-Perturbed-Leader [37], are considered. These algorithms are then exploited in a white-box fashion that allows easy construction of SDGs.
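Before turning to the technical challenge, a minimal numerical illustration of Eq. (1): for a finite domain and a finite class, the supremum is a maximum over finitely many expectation gaps, so IPM_D can be computed exactly. The ten-point domain and the threshold distinguishers below are our illustrative choices.

```python
import numpy as np

# Exact IPM_D between two distributions over X = {0, ..., 9}, with D the
# thresholds d_t(x) = 1[x >= t]; cf. Eq. (1).

X = np.arange(10)
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10))        # stand-in for p_real
q = rng.dirichlet(np.ones(10))        # stand-in for p_syn

def ipm(p, q, D):
    return max(abs(p @ d - q @ d) for d in D)   # |E_p[d] - E_q[d]|

D = [(X >= t).astype(float) for t in range(11)]
print(f"IPM_D(p, q) = {ipm(p, q, D):.4f}")
```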
The technical challenge we face in this work is to generalize the above technique to allow the use of no-regret algorithms that work over infinite classes. Such algorithms don't necessarily share the attractive traits of MW and FtPL that allow their exploitation for generating synthetic data. To overcome this, we study a general framework of sequential SDGs and show how an arbitrary online algorithm can be turned, via a black-box process, into an SDG, which in turn can be privatized. We discuss these challenges in more detail in the full version [10].

Thus, the technical workhorse behind our proof is a learning primitive which is of interest in its own right. We term it here the Sequential Synthetic Data Generator (Sequential-SDG). Similar frameworks have appeared [20, 37] in the context of private SDGs, but also more broadly in the context of generative learning [19, 27, 18, 17]. We further discuss this deep and important connection between private learning and generative learning in Section 5.

In the sequential-SDG setting, we consider a sequential game between a generator (player G) and a discriminator (player D). At every iteration, player G proposes a distribution and player D outputs a discriminating function from a prespecified binary class D. The game stops when player G proposes a distribution that is close in IPM_D distance to the true target distribution. As we focus on the statistical limits of the model, we ignore the optimization and computational complexity aspects and assume that both players are omnipotent in terms of their computational power. We provide a characterization of the classes that can be sequentially fooled (i.e. classes D for which we can construct a sequential SDG) and show that the sequentially foolable classes are exactly the Littlestone classes [28, 7]. In turn, we harness sequential SDGs, together with a private discriminator, to generate private synthetic data. Because this framework assumes only a private learner, we in some sense show that the sequential setting is a canonical method to generate synthetic data.

To summarize, this work contains several contributions. First, we provide the first domain-size-independent sample complexity bounds for DP-Fooling, and show an equivalence between private synthetic data generation and private learning. Second, we introduce and characterize a new class of SDGs and demonstrate their utility in the construction of private synthetic data.

2 Preliminaries

In this section we recall standard definitions and notions in differential privacy and learning (a more extensive background is also given in the full version [10]). Throughout the paper we will study classes D of Boolean functions defined on a domain X. However, we will often use a dual point of view where we think of X as the class of functions and of D as the domain. Therefore, in order to avoid confusion, in this section we let W denote the domain and H ⊆ {0,1}^W denote the function class.

2.1 Differential Privacy and Private Learning

Differential Privacy [13, 12] is a statistical formalism which aims at capturing algorithmic privacy. It concerns problems whose input contains databases with private records, and it enables the design of algorithms that are formally guaranteed to protect the private information. For more background see the surveys [15, 35]. The formal definition is as follows: let W^m denote the input space.
An input instance Ω ∈ W^m is called a database, and two databases Ω′, Ω″ ∈ W^m are called neighbours if there exists a single i ≤ m such that Ω′_i ≠ Ω″_i. Let α, β > 0 be the privacy parameters; a randomized algorithm M : W^m → Σ is called (α, β)-differentially private if for every two neighbouring Ω′, Ω″ ∈ W^m and for every event E ⊆ Σ:

Pr[M(Ω′) ∈ E] ≤ e^α · Pr[M(Ω″) ∈ E] + β.

An algorithm M : ∪_{m=1}^∞ W^m → Y is called differentially private if for every m its restriction to W^m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible¹. Concretely, we will think of α(m) as a small constant (say, 0.1) and β(m) = O(m^(−log m)).

¹ I.e., β(m) = o(m^(−k)) for every k > 0.

Private Learning. We next overview the notion of differentially private learning algorithms [25]. In this context the input database is the training set of the algorithm. Given a hypothesis class H over a domain W, we say that H ⊆ {0,1}^W is privately PAC learnable if it can be learned by a differentially private algorithm. That is, if there is a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0 and every distribution P over W × {0,1}, if M receives an independent sample S ∼ P^m then it outputs a hypothesis h_S such that with probability at least 1 − δ:

L_P(h_S) ≤ min_{h ∈ H} L_P(h) + ε,  where L_P(h) = E_{(w,y)∼P}[ 1[h(w) ≠ y] ].

If M is proper, namely h_S ∈ H for every input sample S, then H is said to be Privately Agnostically and Properly PAC learnable (PAP-PAC-learnable).

In some of our proofs it will be convenient to consider private learning algorithms whose privacy parameter α satisfies α ≤ 1 (rather than α = O(1) as in the definition of private algorithms). This can be done without loss of generality due to privacy amplification theorems (see, for example, [35, Definition 8.2] and the references therein; see also the full version [10] for further details).

Sanitization. The notion of sanitization was introduced by [9] and further studied in [6]. Let H ⊆ {0,1}^W be a class of functions. An (ε, δ, α, β, m)-sanitizer for H is an (α, β)-private algorithm M that receives as input a sample S ∈ W^m and outputs a function Est : H → [0, 1] such that with probability at least 1 − δ,

(∀h ∈ H) :  | Est(h) − |{w ∈ S : h(w) = 1}| / |S| | ≤ ε.

We say that H is sanitizable if there exists an algorithm M and a bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0, the restriction of M to samples of any size m ≥ m(ε, δ) is an (ε, δ, α, β, m)-sanitizer for H with α = α(m) = O(1) and β = β(m) negligible.
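For a small finite class, the sanitization requirement can already be met by the Laplace mechanism with basic composition: each empirical frequency has sensitivity 1/m under replacement of one record, so noise of scale |H|/(mα) per answer makes the whole output (α, 0)-differentially private. The sketch below is only a toy under that finiteness assumption (the point of this paper is precisely that infinite classes need more); all names and parameters are our choices.

```python
import numpy as np

# Toy (alpha, 0)-DP sanitizer for a small finite class H via the Laplace
# mechanism: per-query sensitivity is 1/m, so by basic composition a noise
# scale of |H| / (m * alpha) makes the whole vector of answers alpha-DP.

def laplace_sanitizer(sample, H, alpha, rng):
    m = len(sample)
    scale = len(H) / (m * alpha)
    return {name: float(np.mean([h(w) for w in sample]) + rng.laplace(0.0, scale))
            for name, h in H.items()}

rng = np.random.default_rng(1)
sample = rng.integers(0, 10, size=5_000)                    # toy records
H = {f"x>={t}": (lambda w, t=t: float(w >= t)) for t in range(10)}
est = laplace_sanitizer(sample, H, alpha=1.0, rng=rng)
print({k: round(v, 3) for k, v in list(est.items())[:3]})
```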
Private Uniform Convergence. A basic concept in statistical learning theory is the notion of uniform convergence. In a nutshell, a class of hypotheses H satisfies the uniform convergence property if, for any unknown distribution P over examples, one can uniformly estimate the expected losses of all hypotheses in H given a large enough sample from P. Uniform convergence and statistical learning are closely related; for example, the Fundamental Theorem of PAC Learning asserts that they are equivalent for binary classification [33].

This notion extends to the setting of private learning: a class H satisfies the Private Uniform Convergence property if there exists a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every distribution P over W × {0,1} the following holds: if M is given an input sample S of size at least m(ε, δ) which is drawn independently from P, then it outputs an estimator L̂ : H → [0, 1] such that with probability at least 1 − δ,

(∀h ∈ H) :  | L̂(h) − L_P(h) | ≤ ε.

Note that without the privacy restriction, the estimator

L̂(h) = L_S(h) := |{(w_i, y_i) ∈ S : h(w_i) ≠ y_i}| / |S|

satisfies the requirement for m = Õ(d/ε²), where d is the VC dimension of H; this follows from the celebrated VC Theorem [36, 33].

3 Problem Setup

We assume a domain X and we let D ⊆ {0,1}^X be a class of functions over X. The class D is referred to as the discriminating function class, and its members d ∈ D are called discriminating functions or distinguishers. We let ∆(X) denote the space of distributions over X. Given two distributions p, q ∈ ∆(X), let IPM_D(p, q) denote the IPM distance between p and q as in Eq. (1).

It will be convenient to assume that D is symmetric, i.e. that whenever d ∈ D then also its complement, 1 − d ∈ D. Assuming that D is symmetric does not lose generality and helps simplify notation. We will also use the following shorthand: given a distribution p and a distinguisher d, we will often write p(d) := E_{x∼p}[d(x)]. Under this assumption and notation we can remove the absolute value from the definition of the IPM:

IPM_D(p, q) = sup_{d ∈ D} ( p(d) − q(d) ).   (2)

3.1 Synthetic Data Generators

A synthetic data generator (SDG), without additional constraints, is defined as follows.

Definition 1 (SDG). An SDG, or a fooling algorithm, for D with sample complexity m(ε, δ) is an algorithm M that receives as input a sample S of points from X and parameters ε, δ such that the following holds: for every ε, δ > 0 and every target distribution p_real, if S is an independent sample of size at least m(ε, δ) from p_real, then

Pr[ IPM_D(p_syn, p_real) < ε ] ≥ 1 − δ,

where p_syn := M(S) is the distribution output by M, and the probability is taken over S ∼ (p_real)^m as well as over the randomness of M.

We will say that a class is foolable if it can be fooled by an SDG algorithm whose sample complexity is poly(1/ε, 1/δ). Foolability, without further constraints, comes with the following characterization, which is an immediate corollary (or rather a reformulation) of the celebrated VC Theorem [36]. Denote by M_emp the algorithm that receives a sample S and returns M_emp(S) := p_S, the empirical distribution over S.

Observation 1 ([36]). The following statements are equivalent for a class D ⊆ {0,1}^X:

1. D is PAC learnable.
2. D is foolable.
3. D satisfies the uniform convergence property.
4. D has a finite VC dimension.
5. M_emp is a fooling algorithm for D with sample complexity m = O(log(1/δ)/ε²).

Observation 1 shows that foolability is equivalent to PAC learnability (and in turn to finite VC dimension). We will later see analogous results for DP–Foolability (which is equivalent to differentially private PAC learnability) and Sequential–Foolability (which is equivalent to online learnability). We now discuss the two fundamental models that are the focus of this work – DP–Foolability and Sequential–Foolability.

3.2 DP–Synthetic Data Generators

We next introduce the notion of a DP–synthetic data generator and DP–Foolability.
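As a concrete preview, the toy sanitizer sketched earlier can be pushed one step further into an actual DP-SDG for the threshold class over a small finite domain: privately release the noisy empirical CDF, then post-process it into a bona fide distribution p_syn. Post-processing preserves privacy, and every threshold query is determined by the CDF, so small noise means small IPM. This is a toy construction under our own choices, not the paper's general mechanism.

```python
import numpy as np

# A toy DP-SDG for thresholds over X = {0, ..., n-1}: privately estimate
# the empirical CDF with Laplace noise (alpha-DP by basic composition,
# per-query sensitivity 1/m), then post-process the noisy CDF into a
# valid distribution p_syn.

def dp_sdg_thresholds(sample, n, alpha, rng):
    m = len(sample)
    emp_cdf = np.array([np.mean(sample <= x) for x in range(n)])
    noisy = emp_cdf + rng.laplace(0.0, n / (m * alpha), size=n)
    cdf = np.clip(np.maximum.accumulate(noisy), 0.0, 1.0)   # monotone fix
    cdf[-1] = 1.0                                           # total mass 1
    return np.diff(np.concatenate(([0.0], cdf)))            # p_syn >= 0

rng = np.random.default_rng(3)
n, m = 10, 5_000
X = np.arange(n)
p_real = rng.dirichlet(np.ones(n))
S = rng.choice(X, size=m, p=p_real)
p_syn = dp_sdg_thresholds(S, n, alpha=1.0, rng=rng)
D = [(X >= t).astype(float) for t in range(n + 1)]
print("IPM_D(p_syn, p_real) =", max(abs(p_syn @ d - p_real @ d) for d in D))
```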
1. What are the main contributions and key findings of the paper regarding the relationship between various properties of predicate classes? 2. How does the paper extend previous research on privately and properly learnable classes of predicates? 3. What are the implications of the paper's results for real-world applications in machine learning, particularly concerning differentially private synthetic data generation and estimation algorithms? 4. Are there any limitations or potential challenges in applying the paper's theoretical findings to practical scenarios? If so, what are they?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This highly theoretical paper investigates the conceptual relationship between a series of properties of a class of predicates (boolean-valued functions). These properties are: 1) Whether the class of predicates is privately and properly learnable via probably approximately correct learning, 2) Whether differentially private synthetic data generation is possible that produces synthetic data that is on average close to the real data as measured by all predicates of the class, 3) Whether a differentially private estimation algorithm exists that can produce good estimates for the fraction of samples that satisfy each predicate, 4) Whether there is a differentially private algorithm that can produce estimators in the sense of uniform convergence. The main result of the paper is to show that these properties (1 to 4) are equivalent for all classes of predicates . Strengths This paper is theoretically very well grounded. It explores the relationship between seemingly different properties of classes of predicates, which generalizes to plenty of tasks we might want to tackle via machine learning. Thus, I think the paper is relevant to the community. To the best of my knowledge, this result is novel. Weaknesses I'm not quite sure how relevant the findings are in any practical sense; the "equivalence" shown here can be quite different from equivalent in practice, due to prohibitively large constant factors. For all machine learning tasks I am aware of, even small constants make a significant difference and finding any reasonable utility-privacy trade-off is already difficult. I think the paper could benefit from making these relations more explicit. While subsampling does indeed allow for better privacy bounds, it also requires a much larger dataset to subsample from and at least in practice there are massive hurdles to, say, asking for a 1000 times larger dataset.
NIPS
Title Synthetic Data Generators -- Sequential and Private Abstract We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps nonefficient). Previous work on synthetic data generators focused on the case that the query class D is finite and obtained sample complexity bounds that scale logarithmically with the size |D|. Here we construct a private synthetic data generator whose sample complexity is independent of the domain size, and we replace finiteness with the assumption that D is privately PAC learnable (a formally weaker task, hence we obtain equivalence between the two tasks). 1 Introduction Generating differentially–private synthetic data [9, 14] is a fundamental task in learning that has won considerable attention in the last few years [22, 34, 23, 16]. Formally, given a class D of distinguishing functions, a fooling algorithm receives as input IID samples from an unknown real-life distribution, preal, and outputs a distribution psyn that is -close to preal w.r.t the Integral Probability Metric ([29]), denoted IPMD: IPMD(p, q) = sup d∈D ∣∣∣∣ Ex∼p[d(x)]− Ex∼q[d(x)] ∣∣∣∣ (1) A DP-SDG is then simply defined to be a differentially private fooling algorithm. A fundamental question is then: Which classes D can be privately fooled? In this paper, we focus on sample complexity bounds and give a first such characterization. We prove that a class D is DP–foolable if and only if it is privately (proper) PAC learnable. As a corollary, we obtain equivalence between several important tasks within private learning such as proper PAC Learning [25], Data Release [14], Sanitization [6] and what we will term here Private Uniform Convergence. Much focus has been given to the task of synthetic data generation. Also, several papers [5, 23, 16, 20, 21] discuss the reduction of private fooling to private PAC learning. In contrast with previous work, we assume an arbitrary large domain. In detail, previous existing bounds normally scale logarithmically with the size of the query class D (or alternatively, depend on the size of the domain). Here we initiate a study of the sample complexity that does not assume that the size of the domain is fixed. Instead, we only assume that the class is privately PAC learnable, and obtain sample complexity 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. bounds that are independent of the cardinality |D|. We note that the existence of a private synthetic data generator entails private proper PAC learning, hence our assumption is a necessary condition for the existence of a DP-SDG. The general approach taken for generating synthetic data (which we also follow here) is to exploit an online setup of a sequential game between a generator that aims to fool a discriminator and a discriminator that attempts to distinguish between real and fake data. The utility and generality of this technical method, in the context of privacy, has been observed in several previous works [22, 32, 20]. However, in the finite case, specific on-line algorithms, such as Multiplicative Weights [20] and Follow-the-Perturbed-Leader [37] are considered. The algorithms are then exploited, in a white-box fashion, that allow easy construction of SDGs. 
The technical challenge we face in this work is to generalize the above technique in order to allow the use of no-regret algorithms that work over infinite classes. Such algorithms don’t necessarily share the attractive traits of MW and FtPL that allow their exploitation for generating synthetic data. To overcome this, we study here a general framework of sequential SDGs and show how an arbitrary online algorithm can be turned, via a Black-box process, into an SDG which in turn can be privatized. We discuss these challenges in more detail in the full version [10]. Thus, the technical workhorse behind our proof is a learning primitive which is of interest of its own right. We term it here Sequential Synthetic Data Generator (Sequential-SDG). Similar frameworks appeared [20, 37] in the context of private-SDGs but also more broadly in the context of generative learning [19, 27, 18, 17]. We further discuss this deep and important connection between private learning and generative learning in Section 5 In the sequential-SDG setting, we consider a sequential game between a generator (player G) and a discriminator (player D). At every iteration, player G proposes a distribution and player D outputs a discriminating function from a prespecified binary class D. The game stops when player G proposes a distribution that is close in IPMD distance to the true target distribution. As we focus on the statistical limits of the model, we ignore the optimization and computational complexity aspects and we assume that both players are omnipotent in terms of their computational power. We provide here characterization of the classes that can be sequentially fooled (i.e. classes D for which we can construct a sequential SDG) and show that the sequentially foolable classes are exactly Littlestone classes [28, 7]. In turn, we harness sequential SDGs to generate synthetic data together with a private discriminator in order to generate private synthetic data. Because this framework assumes only a private learner, we in some sense show that the sequential setting is a canonical method to generate synthetic data. To summarize this work contains several contributions: We provide the first domain-size independent sample complexity bounds for DP-Fooling, and show an equivalence between private synthetic data generation and private learning. Second, we introduce and characterize a new class of SDGs and demonstrate their utility in the construction of private synthetic data. 2 Prelimineries In this section we recall standard definitions and notions in differential privacy and learning (a more extensive background is also given in the full version [10]). Throughout the paper we will study classes D of boolean functions defined on a domain X . However, we will often use a dual point of view where we think of X as the class of functions and on D as the domain. Therefore, in order to avoid confusion, in this section we let W denote the domain and H ⊆ {0, 1}W to denote the functions class. 2.1 Differential Privacy and Private Learning Differential Privacy [13, 12] is a statistical formalism which aims at capturing algorithmic privacy. It concerns with problems whose input contains databases with private records and it enables to design algorithms that are formally guaranteed to protect the private information. For more background see the surveys [15, 35]. The formal definition is as follows: letWm denote the input space. 
An input instance Ω ∈ Wm is called a database, and two databases Ω′,Ω′′ ∈ Wm are called neighbours if there exists a single i ≤ m such that Ω′i 6= Ω′′i . Let α, β > 0 be the privacy parameters, a randomized algorithm M : Wm → Σ is called (α, β)-differentially private if for every two neighbouring Ω′,Ω′′ ∈ Wm and for every event E ⊆ Σ: Pr [ M(Ω′) ∈ E ] ≤ eα Pr [ M(Ω′′) ∈ E ] + β. An algorithm M : ∪∞m=1Wm → Y is called differentially private if for every m its restriction toWm is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible1. Concretely, we will think of α(m) as a small constant (say, 0.1) and β(m) = O(m− logm). Private Learning. We next overview the notion of Differentially private learning algorithms [25]. In this context the input database is the training set of the algorithm. Given a hypothesis classH over a domain W , we say thatH ⊆ {0, 1}W is privately PAC learnable if it can be learned by a differentially private algorithm. That is, if there is a differentially private algorithm M and a sample complexity bound m( , δ) = poly(1/ , 1/δ) such that for every , δ > 0 and every distribution P over W × {0, 1}, if M receives an independent sample S ∼ Pm then it outputs an hypothesis hS such that with probability at least 1− δ: LP(hS) ≤ min h∈H LP(h) + , where LP(h) = E(w,y)∼P [ 1[h(w) 6= y] ] . If M is proper, namely hS ∈ H for every input sample S, thenH is said to be Privately Agnostically and Properly PAC learnable (PAP-PAC-learnable). In some of our proofs it will be convenient to consider private learning algorithms whose privacy parameter α satisfies α ≤ 1 (rather than α = O(1) as in the definition of private algorithms). This can be done without loss of generality due to privacy amplification theorems (see, for example (similar, for example [35] (Definition 8.2) and references within (see also the full version [10] for further details). Sanitization. The notion of sanitization has been introduced by (author?) [9] and further studied in [6]. LetH ⊆ {0, 1}W be a class of functions. An ( , δ, α, β,m)-sanitizer forH is an (α, β)-private algorithm M that receives as an input a sample S ∈ Wm and outputs a function Est : H → [0, 1] such that with probability at least 1− δ, (∀h ∈ H) : ∣∣∣Est(h)− |{w ∈ S : h(w) = 1}||S| ∣∣∣ ≤ . We say that H is sanitizable if there exists an algorithm M and a bound m( , δ) = poly(1/ , 1/δ) such that for every , δ > 0, the restriction of M to samples of any size m ≥ m( , δ) is an ( , δ, α, β,m)-sanitizer forH with α = α(m) = O(1) and β = β(m) negligible. Private Uniform Convergence. A basic concept in Statistical Learning Theory is the notion of uniform convergence. In a nutshell, a class of hypothesesH satisfies the uniform convergence property if for any unknown distribution P over examples, one can uniformly estimate the expected losses of all hypotheses in H given a large enough sample from P . Uniform convergence and statistical learning are closely related. For example, the Fundamental Theorem of PAC Learning asserts that they are equivalent for binary-classification [33]. 
This notion extends to the setting of private learning: a class H satisfies the Private Uniform Convergence property if there exists a differentially private algorithm M and a sample complexity bound m( , δ) = poly(1/ , 1/δ) such that for every distribution P overW × {0, 1} the following holds: if M is given an input sample S of size at least m( , δ) which is drawn independently from P, then it outputs an estimator L̂ : H → [0, 1] such that with probability at least (1− δ) it holds that (∀h ∈ H) : ∣∣L̂(h)− LP(h)∣∣ ≤ . Note that without the privacy restriction, the estimator L̂(h) = LS(h) := |{(wi, yi) ∈ S : h(wi) 6= yi}| |S| satisfies the requirement for m = Õ(d/ 2), where d is the VC-dimension ofH; this follows by the celebrated VC-Theorem [36, 33]. 1I.e. β(m) = o(m−k) for every k > 0. 3 Problem Setup We assume a domain X and we let D ⊆ {0, 1}X be a class of functions over X . The class D is referred to as the discriminating functions class and its members d ∈ D are called discriminating functions or distinguishers. We let ∆(X ) denote the space of distributions over X . Given two distributions p, q ∈ ∆(X ), let IPMD(p, q) denote the IPM distance between p and q as in Eq. (1). It will be convenient to assume thatD is symmetric, i.e. that whenever d ∈ D then also its complement, 1− d ∈ D. Assuming that D is symmetric will not lose generality and will help simplify notations. We will also use the following shorthand: given a distribution p and a distinguisher d we will often write p(d) := E x∼p [d(x)]. Under this assumption and notation we can remove the absolute value from the definition of IPM: IPMD(p, q) = sup d∈D (p(d)− q(d)) . (2) 3.1 Synthetic Data Generators A synthetic data generator (SDG), without additional constraints, is defined as follows Definition 1 (SDG). An SDG, or a fooling algorithm, for D with sample complexity m( , δ) is an algorithm M that receives as input a sample S of points from X and parameters , δ such that the following holds: for every , δ > 0 and every target distribution preal, if S is an independent sample of size at least m( , δ) from preal then Pr [ IPMD(psyn, preal) < ] ≥ 1− δ, where psyn := M(S) is the distribution outputted by M , and the probability is taken over S ∼ (preal) m as well as over the randomness of M . We will say that a class is foolable if it can be fooled by an SDG algorithm whose sample complexity is poly(1 , 1 δ ). Foolability, without further constraints, comes with the following characterization which is an immediate corollary (or rather a reformulation) of the celebrated VC Theorem ([36]). Denote by Memp an algorithm that receives a sample S and returns Memp(S) := pS , the empirical distribution over S. Observation 1 ([36]). The following statements are equivalent for a class D ⊆ {0, 1}X : 1. D is PAC–learnable. 2. D is foolable. 3. D satisfies the uniform convergence property. 4. D has a finite VC-dimension. 5. Memp is a fooling algorithm for D with sample complexity m = O( log 1/δ 2 ). Observation 1 shows that foolability is equivalent to PAC-learnability (and in turn to finite VC dimension). We will later see analogous results for DP–Foolability (which is equivalent to differentially private PAC learnability) and Sequential–Foolability (which is equivalent to online learnability). We now discuss the two fundamental models that are the focus of this work – DP–Foolability and Sequential–Foolability. 3.2 DP–Synthetic Data Generators We next introduce the notion of a DP–synthetic data generator and DP–Foolability. 
3.2 DP–Synthetic Data Generators

We next introduce the notion of a DP–synthetic data generator and DP–Foolability. As discussed, DP-SDGs have been the focus of study of several papers [9, 14, 22, 34, 23, 16].

Definition 2 (DP-SDG). A DP-SDG, or a DP-fooling algorithm, M for a class D is an algorithm that receives as input a finite sample S and two parameters (ε, δ) and satisfies:
• Differential Privacy. For every m, the restriction of M to input samples S of size m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible.
• Fooling. M fools D: there exists a sample complexity bound m = m(ε, δ) such that for every target distribution p_real, if S is a sample of at least m examples from p_real then IPM_D(p_syn, p_real) ≤ ε with probability at least 1 − δ, where p_syn is the output of M on the input sample S.

We will say in short that a class D is DP–Foolable if there exists a DP-SDG for the class D with sample complexity m = poly(1/ε, 1/δ).

3.3 Sequential–Synthetic Data Generators

We now describe the second model of foolability which, as discussed, is the technical engine behind our proof of equivalence between DP-foolability and DP-learning.

Sequential-SDGs. A Sequential-SDG can be thought of as a sequential game between two players called the generator (denoted by G) and the discriminator (denoted by D). At the beginning of the game, the discriminator D receives the target distribution, denoted by p_real. The goal of the generator G is to find a distribution p such that p and p_real are ε-indistinguishable with respect to some prespecified discriminating class D and an error parameter ε > 0, i.e. IPM_D(p, p_real) ≤ ε. We note that both players know D and ε. The game proceeds in rounds, where in each round t the generator G submits to the discriminator a candidate distribution p_t and the discriminator replies according to the following rule: if IPM_D(p_t, p_real) ≤ ε then the discriminator replies "WIN" and the game terminates. Else, the discriminator picks d_t ∈ D such that |p_real(d_t) − p_t(d_t)| > ε, and sends d_t to the generator along with a bit which indicates whether p_t(d_t) > p_real(d_t) or p_t(d_t) < p_real(d_t). Equivalently, instead of transmitting an extra bit, we assume that the discriminator always sends d_t ∈ D ∪ (1 − D) such that

p_real(d_t) − p_t(d_t) > ε.  (3)

Definition 3 (Sequential–Foolability). Let ε > 0 and let D be a discriminating class.
1. D is called ε-Sequential–Foolable if there exists a generator G and a bound T = T(ε) such that G wins against any discriminator D with any target distribution p_real after at most T rounds.
2. The round complexity of Sequential–Fooling D is defined as the minimal upper bound T(ε) on the number of rounds that suffice to ε-Fool D.
3. D is called Sequential–Foolable if it is ε-Sequential–Foolable for every ε > 0 with T(ε) = poly(1/ε).

In the next section we will see that if D is ε-Sequential–Foolable for some fixed ε < 1/2 then it is Sequential–Foolable with round complexity T(ε) = O(1/ε²).
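The generator strategy behind our results is deferred to the full version [10]; purely to make the protocol of Definition 3 concrete, here is a sketch of the round structure over a finite domain, in which the generator uses a naive multiplicative-weights update. The random instance, the learning rate, and the round budget are assumptions of this illustration, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k, eps = 30, 0.1
eta = eps / 2                              # illustrative learning rate

# Finite domain {0, ..., k-1}; distinguishers are random subsets,
# closed under complement as in Eq. (3).
D = rng.integers(0, 2, size=(40, k)).astype(float)
D = np.vstack([D, 1.0 - D])

p_real = rng.dirichlet(np.ones(k))         # the target; only D gets to see it
p_t = np.full(k, 1.0 / k)                  # generator's first proposal: uniform

for t in range(1, 5_001):
    gaps = D @ p_real - D @ p_t            # p_real(d) - p_t(d) for every d
    if gaps.max() <= eps:                  # discriminator checks IPM_D <= eps
        print(f"WIN after {t} rounds")
        break
    d_t_fn = D[np.argmax(gaps)]            # a violated distinguisher d_t
    p_t = p_t * np.exp(eta * d_t_fn)       # shift mass towards where d_t fires
    p_t /= p_t.sum()
else:
    print("no WIN within the round budget")
```

Since the class is symmetric, gaps.max() equals IPM_D(p_t, p_real), so the termination test matches the protocol; on random instances like this one the loop typically terminates well within the budget, though the budget itself is only a heuristic choice here.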
4 Results

Our main result characterizes DP–Foolability in terms of basic notions from differential privacy and PAC learning.

Theorem 1 (Characterization of DP–Fooling). The following statements are equivalent for a class D ⊆ {0, 1}^X:
1. D is privately and properly learnable in the agnostic PAC setting.
2. D is DP–Foolable.
3. D is sanitizable.
4. D satisfies the private uniform convergence property.

Theorem 1 shows a qualitative equivalence between the four notions; quantitative bounds on the entailed sample complexity are provided in the full version [10]. The implication Item 3 ⟹ Item 1 was known prior to this work and was proven in [6] (albeit in the pure case).

The equivalence among Items 2 to 4 is natural and expected. Indeed, each of them expresses the existence of a private algorithm that publishes, privately, certain estimates of all functions in D. The fact that Item 1 implies the other three items is perhaps more surprising and is the main contribution of this work; we show that Item 1 implies Item 2. Our proof exploits the Sequential framework. In a nutshell, we observe that a class that is both sequentially foolable and privately PAC learnable is also DP-foolable: this follows by constructing a sequential SDG with a private discriminator (which is assumed to exist), combined with standard composition and post-processing arguments regarding the privacy of the generator's output. Thus, to prove the implication we only need to show that private PAC learning implies sequential foolability. This follows from Corollary 2, which characterizes sequentially foolable classes, together with a recent result of [1] showing that privately PAC learnable classes have finite Littlestone dimension. See the full version [10] for a complete proof.

Private learnability versus private uniform convergence. The equivalence Item 1 ⟺ Item 4 is between private learning and private uniform convergence. The non-private analogue of this equivalence is a cornerstone in statistical learning; it reduces the statistical challenge of minimizing an unknown population loss to an optimization problem of minimizing a known empirical estimate. In particular, it yields the celebrated Empirical Risk Minimization (ERM) principle: "Output h ∈ H that minimizes the empirical loss". We therefore highlight this equivalence in the following corollary:

Corollary 1 (Private proper learning = private uniform convergence). Let H ⊆ {0, 1}^X. Then H is privately and properly PAC learnable if and only if H satisfies the private uniform convergence property.

Sequential–SDGs. We next describe our characterization of Sequential-SDGs. As discussed, this characterization is the technical heart behind the equivalence between private PAC learning and DP-foolability. Nevertheless, we believe it may be of interest in its own right. We thus provide quantitative upper and lower bounds on the round complexity of Sequential-SDGs in terms of the Littlestone dimension (see [7] or the full version [10] for the exact definition).

Theorem 2 (Quantitative round-complexity bounds). Let D be a discriminating class with dual Littlestone dimension ℓ* and let T(ε) denote the round complexity of Sequential–Fooling D. Then,
1. T(ε) = O((ℓ*/ε²) log ℓ*) for every ε.
2. T(ε) ≥ ℓ*/2 for every ε < 1/2.

It would be interesting to close the gap between the two bounds in terms of ε > 0, and we leave this for future work. To prove Item 1 we construct a generator with a winning strategy, which we outline in the full version [10]. A complete proof of Theorem 2 appears in the full version [10]. As a corollary we get the following characterization of Sequential–Foolability:

Corollary 2 (Characterization of Sequential–Foolability). The following are equivalent for D ⊆ {0, 1}^X:
1. D is Sequential–Foolable.
2. D is ε-Sequential–Foolable for some ε < 1/2.
3. D has a finite dual Littlestone dimension.
4. D has a finite Littlestone dimension.

Corollary 2 follows directly from Theorem 2 (which gives the equivalences 1 ⟺ 2 ⟺ 3) and from [8] (which gives the equivalence 3 ⟺ 4; see the full version [10] for further detail).

Tightness of ε = 1/2. The implication Item 2 ⟹ Item 1 can be seen as a boosting result: "weak" foolability for some fixed ε < 1/2 implies "strong" foolability for every ε. The following example demonstrates that the dependence on ε in Item 2 cannot be improved beyond 1/2. Let X be the unit circle in R², and let D consist of all arcs whose length is exactly half of the circumference. It is easy to verify that the uniform distribution μ over X satisfies IPM_D(μ, p_real) ≤ 1/2 for any target distribution p_real (since μ(d) = 1/2 for all d ∈ D). Therefore D is (ε = 1/2)-Sequential–Foolable with round complexity T(1/2) = 1. On the other hand, D has an infinite Littlestone dimension and therefore is not Sequential–Foolable.
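The arc example can also be checked numerically. The sketch below discretizes the circle and verifies that, against a random target distribution, the uniform distribution is within 1/2 of every half-circumference arc; the discretization into n points is an assumption of the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000                                  # discretize the unit circle into n points

mu = np.full(n, 1.0 / n)                   # uniform distribution on the circle
p_real = rng.dirichlet(np.ones(n))         # an arbitrary target distribution

# Each distinguisher is an arc of exactly half the circumference,
# i.e. n/2 consecutive points with wrap-around.
worst = 0.0
for start in range(n):
    arc = np.zeros(n)
    arc[(np.arange(n // 2) + start) % n] = 1.0
    worst = max(worst, abs(arc @ p_real - arc @ mu))   # mu(d) = 1/2 exactly

print(worst)   # never exceeds 1/2, matching IPM_D(mu, p_real) <= 1/2
```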
Sequential-SDGs versus DP-SDGs. So far we have introduced and characterized two formal setups for synthetic data generation. It is therefore natural to compare and seek connections between these two frameworks. We first note that the DP setting may only be more restrictive than the Sequential setting:

Corollary 3 (DP–Foolability implies Sequential–Foolability). Let D be a class that is DP–Foolable. Then D has finite Littlestone dimension and in particular is Sequential–Foolable.

Corollary 3 follows from Theorem 1: indeed, the latter yields that DP–Foolability is equivalent to private agnostic proper PAC learnability (PAP-PAC), and by [1] PAP-PAC learnability implies a finite Littlestone dimension, which by Corollary 2 implies Sequential–Foolability.

Towards a converse of Corollary 3. By the above it follows that the family of classes D that can be fooled by a DP algorithm is contained in the family of all Sequential–Foolable classes; specifically, those which admit a Sequential-SDG with a differentially private discriminator. We do not know whether the converse holds, i.e. whether "Sequential–Foolability ⟹ DP–Foolability". Nevertheless, the implication "PAP-PAC learnability ⟹ DP–Foolability" (Theorem 1) can be regarded as an intermediate step towards this converse. Indeed, as discussed above, PAP-PAC learnability implies Sequential–Foolability. It is therefore natural to consider the following question, which is equivalent² to the converse of Corollary 3:

Question 1. Let D be a class that has finite Littlestone dimension. Is D properly and privately learnable in the agnostic PAC setting?

A weaker form of this question – whether every Littlestone class is privately PAC learnable – was posed by [1] as an open question (and was recently resolved in [11]).

² I.e., an affirmative answer to Question 1 is equivalent to the converse of Corollary 3.
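Since Theorem 2, Corollary 2, and Question 1 are all phrased in terms of the Littlestone dimension, the following brute-force sketch may help make the quantity concrete: it computes the Littlestone dimension of a small finite class by unwinding the mistake-tree recursion. It runs in exponential time and assumes a finite domain; the threshold class in the usage example is our choice, not one taken from the paper.

```python
def ldim(H, X):
    """Littlestone dimension of a finite class H (tuples of labels indexed
    by X) over a finite domain X, via the recursion: ldim(H) >= d + 1 iff
    some point x splits H into two non-empty sub-classes (by the label on x),
    each of Littlestone dimension >= d. A singleton class has dimension 0."""
    best = 0
    for i, _ in enumerate(X):
        H0 = [h for h in H if h[i] == 0]
        H1 = [h for h in H if h[i] == 1]
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

# Usage: thresholds h_j(x) = 1[x >= j] over the domain {0, ..., 7}.
X = list(range(8))
H = [tuple(int(x >= j) for x in X) for j in range(9)]
print(ldim(H, X))   # prints 3: binary search gives depth ~ log2(|H|) here
```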
5 Discussion

In this work we develop a theory for two types of constrained SDGs, sequential and private. Let us now discuss SDGs more generally: we broadly want to consider algorithms that observe data, sampled from some real-life distribution, and in turn generate new synthetic examples that resemble real-life samples, without any a-priori constraints. For example, consider an algorithm that receives as input some tunes from a specific music genre (e.g. jazz, rock, pop) and then outputs a new tune. Recently, there has been a remarkable breakthrough in the construction of such SDGs with the introduction of the algorithmic frameworks of Generative Adversarial Networks (GANs) [18, 17], as well as Variational AutoEncoders (VAEs) [26, 31]. In turn, the use of SDGs has seen many potential applications [24, 30, 38]. Here we follow a common interpretation of SDGs as IPM minimizers [2, 4].

However, it was also observed [2, 3] that there is a critical gap between the task of generating new synthetic data (such as new tunes) and the IPM minimization problem. In detail, Observation 1 shows that the IPM framework allows certain "bad" solutions that memorize. Specifically, let S be a sufficiently large independent sample from the target distribution and consider the empirical distribution as a candidate solution to the IPM minimization problem. Then, with high probability, the IPM distance between the empirical and the target distribution vanishes as |S| grows.

To illustrate the problem, imagine that our goal is to generate new jazz tunes. Let us consider the discriminating class of all human music experts. The solution suggested above uses the empirical distribution and simply "generates" a tune from the training set³. This clearly misses the goal of generating new and original tunes, but the IPM distance minimization framework does not discard this solution. For this reason we often invoke further restrictions on the SDG and consider constrained SDGs. For example, [4] suggests restricting the class of possible outputs p_syn and shows that, under certain assumptions on the distribution p_real, the right choice of the class D leads to learning the true underlying distribution (in Wasserstein distance). In this work we explored two other types of constrained SDGs, DP–SDGs and Sequential–SDGs, and we characterized the foolable classes in a distribution-independent model, i.e. without making assumptions on the distribution p_real.

One motivation for studying these models, as well as the interest in a distribution-independent setting, is the following underlying question: the output of synthetic data generators should be new examples, but in what sense do we require the output to be novel or distinct from the training set? How, and in what sense, should we avoid copying the training data or even outputting a memorized version of it?

Answering such questions is of practical importance. For example, consider a company that wishes to automatically generate music or images to be used commercially. One approach could be to train an SDG and then sell the generated output. What can we say about the output of SDGs in this context? Are the images generated by the SDG original? Are they copying the data, or breaching copyright?

In this context, the differentially private setup comes with a very attractive interpretation that provides further motivation to study DP-SDGs, beyond preserving the privacy of the dataset. To illustrate our interpretation of differential privacy as a criterion for originality, consider the following situation: imagine that Lisa is a learning painter. She has learned to paint by observing samples of paintings produced by a mentor painter, Mona. After a learning process, she draws a new painting L. Mona agrees that this new painting is a valid work of art, but claims the result is not original: it is a mere copy of a painting, say M, produced by Mona. How can Lisa argue that painting L is not a plagiary? The easiest argument would be that she had never observed M. However, this line of defence is not always realistic, as she must observe some paintings. Instead, we will argue using the following thought experiment: What if Lisa had never observed M? Might she still create L? If we could prove that this is the case, then one could argue similarly that L is not a plagiary.
The last argument is captured by the notion of differential privacy. In a nutshell, a randomized algorithm that receives a sequence of data points x̄ as input is differentially private if removing or replacing a single data point in its input does not affect its output y by much; more accurately, for any event E over the output y that has non-negligible probability on input x̄, the probability remains non-negligible even after modifying one data point in x̄.

The sequential setting also comes with an appealing interpretation in this context. A remarkable property of existing SDGs (e.g. GANs), which potentially reduces the likelihood of memorization, is that the generator's access to the sample is masked. In more detail, the generator only has restricted access to the training set, via feedback from a discriminator that observes real data vs. synthetic data. Thus, potentially, the generator may avoid degenerate solutions that memorize. Nevertheless, even though the generator is not given direct access to the training data, information about this data could still "leak" through the feedback it receives from the discriminator. This raises the question of whether Sequential–Foolability can provide guarantees against memorization, and perhaps more importantly, in what sense. To start answering this question, part of this work aims to understand the interconnection between the task of Sequential–Fooling and the task of DP–Fooling.

Finally, the above questions also motivate our interest in a distribution-independent setting that avoids assumptions on the distribution p_real, which we often do not know. In detail, if we only cared about the resemblance between p_real and p_syn, then we might be content with any algorithm that performs well in practice, regardless of whether certain assumptions made in the analysis hold. But if we care to obtain guarantees against copying or memorizing, then these guarantees should hold in principle, and thus we should prefer to obtain them without overly strong assumptions on the distribution p_real.

³ There are at most 7 · 10⁹ music experts in the world. Hence, by standard concentration inequalities, a sample of size roughly (9/ε²) log 10 suffices to achieve IPM distance at most ε with high probability.

Acknowledgments and Disclosure of Funding

R.L. is supported by ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M. is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States – Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.

Broader Impact

There are no foreseen ethical or societal consequences for the research presented herein.
1. What is the main contribution of the paper regarding differentially private synthetic data generation?
2. What are the strengths of the paper, particularly in its attempt to provide a characterization of classes that admit a private SDG?
3. Do you have any concerns about the paper's weaknesses, such as the lack of consideration for computational or communication efficiency?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: This paper studies differentially private synthetic data generation with respect to a (possibly infinite) class of queries D and shows that if D is privately and properly PAC learnable, then it admits a private Synthetic Data Generator (SDG). More generally, the paper establishes an equivalence between the statements: 1. D is privately and properly PAC learnable, 2. D admits a private SDG, and 3. D is sanitizable. The proof introduces, as an intermediate step, sequential SDGs, and shows an equivalence between classes that admit sequential SDGs and classes with finite Littlestone dimension.

Strengths: This paper studies differentially private SDGs, which is an area of interest to the NeurIPS community.
- It attempts to provide a characterization of the classes that admit a private SDG, via private learnability, finite Littlestone dimension, and the notion of sequential SDGs. The latter is introduced in this paper and seems interesting in its own right. I think its importance stems from the fact that it connects and provides equivalences among many interesting established notions.
- Previous work on private SDGs requires that the size of the original dataset depend logarithmically on the size of the class. This work gives sample complexity bounds for private SDGs for *infinite* classes, which depend on the Littlestone dimension of the class.
- Although it might seem that, in view of recent results, the connection of private SDGs and sequential SDGs/online learnability/Littlestone dimension is natural, the proof is fairly intricate and combines several ideas from these areas.

Weaknesses:
- There is no consideration of computational or communication efficiency in the constructions for a private/sequential SDG. The generator and discriminators are assumed to be omnipotent in terms of computational power. So at this point the paper serves more as a proof-of-concept and not as a practical construction.
NIPS
1. What is the primary contribution of the paper regarding private-proper-agnostic learnability?
2. Can you elaborate on the concept of sequential-SDG and its significance?
3. What are the round complexity upper and lower bounds for sequential-SDG in terms of dual-Littlestone dimension?
4. How do the bounds provide characterization to classes that obtain a sequential-SDG as Littlestone classes?
5. Are there any concerns or limitations regarding the paper's focus on private-fool-ability and sanitize-ability?
Summary and Contributions Strengths Weaknesses
Summary and Contributions:
- Private-proper-agnostic learnability implies private foolability (and hence is equivalent to it and to sanitizability).
- Definition of sequential-SDG.
- Round complexity upper and lower bounds for sequential-SDG in terms of dual Littlestone dimension. The bounds give a characterization of the classes that admit a sequential-SDG as Littlestone classes.

Strengths: The paper is well written. The definition of sequential-SDG looks like an interesting online variant of the known SDG problem. The results are solid and add a nice layer on top of the recent results in the field.

Weaknesses: Most of the equivalences are known (the authors themselves note that). Edit: the authors addressed this point in their feedback and, as I said, also pointed out the novel part and the equivalences not previously known. My score stays the same.
NIPS
Title Synthetic Data Generators -- Sequential and Private Abstract We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps nonefficient). Previous work on synthetic data generators focused on the case that the query class D is finite and obtained sample complexity bounds that scale logarithmically with the size |D|. Here we construct a private synthetic data generator whose sample complexity is independent of the domain size, and we replace finiteness with the assumption that D is privately PAC learnable (a formally weaker task, hence we obtain equivalence between the two tasks). 1 Introduction Generating differentially–private synthetic data [9, 14] is a fundamental task in learning that has won considerable attention in the last few years [22, 34, 23, 16]. Formally, given a class D of distinguishing functions, a fooling algorithm receives as input IID samples from an unknown real-life distribution, preal, and outputs a distribution psyn that is -close to preal w.r.t the Integral Probability Metric ([29]), denoted IPMD: IPMD(p, q) = sup d∈D ∣∣∣∣ Ex∼p[d(x)]− Ex∼q[d(x)] ∣∣∣∣ (1) A DP-SDG is then simply defined to be a differentially private fooling algorithm. A fundamental question is then: Which classes D can be privately fooled? In this paper, we focus on sample complexity bounds and give a first such characterization. We prove that a class D is DP–foolable if and only if it is privately (proper) PAC learnable. As a corollary, we obtain equivalence between several important tasks within private learning such as proper PAC Learning [25], Data Release [14], Sanitization [6] and what we will term here Private Uniform Convergence. Much focus has been given to the task of synthetic data generation. Also, several papers [5, 23, 16, 20, 21] discuss the reduction of private fooling to private PAC learning. In contrast with previous work, we assume an arbitrary large domain. In detail, previous existing bounds normally scale logarithmically with the size of the query class D (or alternatively, depend on the size of the domain). Here we initiate a study of the sample complexity that does not assume that the size of the domain is fixed. Instead, we only assume that the class is privately PAC learnable, and obtain sample complexity 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. bounds that are independent of the cardinality |D|. We note that the existence of a private synthetic data generator entails private proper PAC learning, hence our assumption is a necessary condition for the existence of a DP-SDG. The general approach taken for generating synthetic data (which we also follow here) is to exploit an online setup of a sequential game between a generator that aims to fool a discriminator and a discriminator that attempts to distinguish between real and fake data. The utility and generality of this technical method, in the context of privacy, has been observed in several previous works [22, 32, 20]. However, in the finite case, specific on-line algorithms, such as Multiplicative Weights [20] and Follow-the-Perturbed-Leader [37] are considered. The algorithms are then exploited, in a white-box fashion, that allow easy construction of SDGs. 
The technical challenge we face in this work is to generalize the above technique in order to allow the use of no-regret algorithms that work over infinite classes. Such algorithms don’t necessarily share the attractive traits of MW and FtPL that allow their exploitation for generating synthetic data. To overcome this, we study here a general framework of sequential SDGs and show how an arbitrary online algorithm can be turned, via a Black-box process, into an SDG which in turn can be privatized. We discuss these challenges in more detail in the full version [10]. Thus, the technical workhorse behind our proof is a learning primitive which is of interest of its own right. We term it here Sequential Synthetic Data Generator (Sequential-SDG). Similar frameworks appeared [20, 37] in the context of private-SDGs but also more broadly in the context of generative learning [19, 27, 18, 17]. We further discuss this deep and important connection between private learning and generative learning in Section 5 In the sequential-SDG setting, we consider a sequential game between a generator (player G) and a discriminator (player D). At every iteration, player G proposes a distribution and player D outputs a discriminating function from a prespecified binary class D. The game stops when player G proposes a distribution that is close in IPMD distance to the true target distribution. As we focus on the statistical limits of the model, we ignore the optimization and computational complexity aspects and we assume that both players are omnipotent in terms of their computational power. We provide here characterization of the classes that can be sequentially fooled (i.e. classes D for which we can construct a sequential SDG) and show that the sequentially foolable classes are exactly Littlestone classes [28, 7]. In turn, we harness sequential SDGs to generate synthetic data together with a private discriminator in order to generate private synthetic data. Because this framework assumes only a private learner, we in some sense show that the sequential setting is a canonical method to generate synthetic data. To summarize this work contains several contributions: We provide the first domain-size independent sample complexity bounds for DP-Fooling, and show an equivalence between private synthetic data generation and private learning. Second, we introduce and characterize a new class of SDGs and demonstrate their utility in the construction of private synthetic data. 2 Prelimineries In this section we recall standard definitions and notions in differential privacy and learning (a more extensive background is also given in the full version [10]). Throughout the paper we will study classes D of boolean functions defined on a domain X . However, we will often use a dual point of view where we think of X as the class of functions and on D as the domain. Therefore, in order to avoid confusion, in this section we let W denote the domain and H ⊆ {0, 1}W to denote the functions class. 2.1 Differential Privacy and Private Learning Differential Privacy [13, 12] is a statistical formalism which aims at capturing algorithmic privacy. It concerns with problems whose input contains databases with private records and it enables to design algorithms that are formally guaranteed to protect the private information. For more background see the surveys [15, 35]. The formal definition is as follows: letWm denote the input space. 
The formal definition is as follows: let W^m denote the input space. An input instance Ω ∈ W^m is called a database, and two databases Ω′, Ω′′ ∈ W^m are called neighbours if there exists a single i ≤ m such that Ω′_i ≠ Ω′′_i. Let α, β > 0 be the privacy parameters; a randomized algorithm M : W^m → Σ is called (α, β)-differentially private if for every two neighbouring Ω′, Ω′′ ∈ W^m and for every event E ⊆ Σ:

$$\Pr\left[M(\Omega') \in E\right] \le e^{\alpha} \Pr\left[M(\Omega'') \in E\right] + \beta.$$

An algorithm M : ∪_{m=1}^∞ W^m → Y is called differentially private if for every m its restriction to W^m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible¹. Concretely, we will think of α(m) as a small constant (say, 0.1) and β(m) = O(m^{−log m}).

¹ I.e. β(m) = o(m^{−k}) for every k > 0.

Private Learning. We next overview the notion of differentially private learning algorithms [25]. In this context the input database is the training set of the algorithm. Given a hypothesis class H over a domain W, we say that H ⊆ {0, 1}^W is privately PAC learnable if it can be learned by a differentially private algorithm. That is, if there is a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0 and every distribution P over W × {0, 1}, if M receives an independent sample S ∼ P^m then it outputs a hypothesis h_S such that with probability at least 1 − δ:

$$L_P(h_S) \le \min_{h \in H} L_P(h) + \varepsilon, \quad \text{where } L_P(h) = \mathbb{E}_{(w,y)\sim P}\left[\mathbb{1}[h(w) \neq y]\right].$$

If M is proper, namely h_S ∈ H for every input sample S, then H is said to be Privately Agnostically and Properly PAC learnable (PAP-PAC-learnable). In some of our proofs it will be convenient to consider private learning algorithms whose privacy parameter α satisfies α ≤ 1 (rather than α = O(1) as in the definition of private algorithms). This can be done without loss of generality due to privacy amplification theorems (see, for example, [35, Definition 8.2] and references within; see also the full version [10] for further details).

Sanitization. The notion of sanitization was introduced by [9] and further studied in [6]. Let H ⊆ {0, 1}^W be a class of functions. An (ε, δ, α, β, m)-sanitizer for H is an (α, β)-private algorithm M that receives as input a sample S ∈ W^m and outputs a function Est : H → [0, 1] such that with probability at least 1 − δ,

$$(\forall h \in H): \quad \left| \mathrm{Est}(h) - \frac{|\{w \in S : h(w) = 1\}|}{|S|} \right| \le \varepsilon.$$

We say that H is sanitizable if there exists an algorithm M and a bound m(ε, δ) = poly(1/ε, 1/δ) such that for every ε, δ > 0, the restriction of M to samples of any size m ≥ m(ε, δ) is an (ε, δ, α, β, m)-sanitizer for H with α = α(m) = O(1) and β = β(m) negligible.
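For a finite class H, a basic sanitizer follows from the Laplace mechanism: release every empirical frequency with calibrated noise. The sketch below is ours and is emphatically not the construction of this paper (its noise scale grows with |H|, so it is useless for the infinite classes studied here); it only illustrates the definition.

```python
import numpy as np

def laplace_sanitizer(sample, hypotheses, alpha, rng=None):
    """(alpha, 0)-DP sanitizer for a finite class via the Laplace mechanism.

    Each frequency |{w in S : h(w) = 1}| / |S| changes by at most 1/m when
    one record is replaced; by basic composition over the |H| released
    values, Laplace noise of scale |H| / (alpha * m) gives alpha-DP overall.
    """
    rng = rng or np.random.default_rng()
    m = len(sample)
    scale = len(hypotheses) / (alpha * m)
    return {h: np.mean([h(w) for w in sample]) + rng.laplace(0.0, scale)
            for h in hypotheses}
```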
Private Uniform Convergence. A basic concept in Statistical Learning Theory is the notion of uniform convergence. In a nutshell, a class of hypotheses H satisfies the uniform convergence property if, for any unknown distribution P over examples, one can uniformly estimate the expected losses of all hypotheses in H given a large enough sample from P. Uniform convergence and statistical learning are closely related; for example, the Fundamental Theorem of PAC Learning asserts that they are equivalent for binary classification [33].

This notion extends to the setting of private learning: a class H satisfies the Private Uniform Convergence property if there exists a differentially private algorithm M and a sample complexity bound m(ε, δ) = poly(1/ε, 1/δ) such that for every distribution P over W × {0, 1} the following holds: if M is given an input sample S of size at least m(ε, δ) which is drawn independently from P, then it outputs an estimator L̂ : H → [0, 1] such that with probability at least 1 − δ,

$$(\forall h \in H): \quad \left| \hat{L}(h) - L_P(h) \right| \le \varepsilon.$$

Note that without the privacy restriction, the estimator

$$\hat{L}(h) = L_S(h) := \frac{|\{(w_i, y_i) \in S : h(w_i) \neq y_i\}|}{|S|}$$

satisfies the requirement for m = Õ(d/ε²), where d is the VC dimension of H; this follows by the celebrated VC Theorem [36, 33].

3 Problem Setup

We assume a domain X and let D ⊆ {0, 1}^X be a class of functions over X. The class D is referred to as the discriminating function class and its members d ∈ D are called discriminating functions or distinguishers. We let ∆(X) denote the space of distributions over X. Given two distributions p, q ∈ ∆(X), let IPM_D(p, q) denote the IPM distance between p and q as in Eq. (1). It will be convenient to assume that D is symmetric, i.e. that whenever d ∈ D then also its complement, 1 − d ∈ D. Assuming that D is symmetric does not lose generality and helps simplify notation. We will also use the following shorthand: given a distribution p and a distinguisher d we will often write p(d) := E_{x∼p}[d(x)]. Under this assumption and notation we can remove the absolute value from the definition of the IPM:

$$\mathrm{IPM}_{\mathcal{D}}(p, q) = \sup_{d \in \mathcal{D}} \left( p(d) - q(d) \right). \tag{2}$$

3.1 Synthetic Data Generators

A synthetic data generator (SDG), without additional constraints, is defined as follows.

Definition 1 (SDG). An SDG, or a fooling algorithm, for D with sample complexity m(ε, δ) is an algorithm M that receives as input a sample S of points from X and parameters ε, δ such that the following holds: for every ε, δ > 0 and every target distribution p_real, if S is an independent sample of size at least m(ε, δ) from p_real then

$$\Pr\left[\mathrm{IPM}_{\mathcal{D}}(p_{\mathrm{syn}}, p_{\mathrm{real}}) < \varepsilon\right] \ge 1 - \delta,$$

where p_syn := M(S) is the distribution output by M, and the probability is taken over S ∼ (p_real)^m as well as over the randomness of M.

We say that a class is foolable if it can be fooled by an SDG algorithm whose sample complexity is poly(1/ε, 1/δ). Foolability, without further constraints, comes with the following characterization, which is an immediate corollary (or rather a reformulation) of the celebrated VC Theorem [36]. Denote by M_emp an algorithm that receives a sample S and returns M_emp(S) := p_S, the empirical distribution over S.

Observation 1 ([36]). The following statements are equivalent for a class D ⊆ {0, 1}^X:
1. D is PAC-learnable.
2. D is foolable.
3. D satisfies the uniform convergence property.
4. D has a finite VC dimension.
5. M_emp is a fooling algorithm for D with sample complexity m = O(log(1/δ)/ε²).

Observation 1 shows that foolability is equivalent to PAC learnability (and in turn to finite VC dimension). We will later see analogous results for DP–Foolability (which is equivalent to differentially private PAC learnability) and Sequential–Foolability (which is equivalent to online learnability). We now discuss the two fundamental models that are the focus of this work – DP–Foolability and Sequential–Foolability.

3.2 DP–Synthetic Data Generators

We next introduce the notion of a DP–synthetic data generator and DP–Foolability.
As discussed, DP-SDGs have been the focus of study of several papers [9, 14, 22, 34, 23, 16].

Definition 2 (DP-SDG). A DP-SDG, or a DP-fooling algorithm, M for a class D is an algorithm that receives as input a finite sample S and two parameters (ε, δ) and satisfies:

• Differential Privacy. For every m, the restriction of M to input samples S of size m is (α(m), β(m))-differentially private, where α(m) = O(1) and β(m) is negligible.
• Fooling. M fools D: there exists a sample complexity bound m = m(ε, δ) such that for every target distribution p_real, if S is a sample of at least m examples from p_real, then IPM_D(p_syn, p_real) ≤ ε with probability at least 1 − δ, where p_syn is the output of M on the input sample S.

We will say in short that a class D is DP–Foolable if there exists a DP-SDG for the class D with sample complexity m = poly(1/ε, 1/δ).

3.3 Sequential–Synthetic Data Generators

We now describe the second model of foolability which, as discussed, is the technical engine behind our proof of the equivalence between DP-foolability and DP-learning.

Sequential-SDGs. A Sequential-SDG can be thought of as a sequential game between two players called the generator (denoted by G) and the discriminator (denoted by D). At the beginning of the game, the discriminator D receives the target distribution, denoted p_real. The goal of the generator G is to find a distribution p such that p and p_real are ε-indistinguishable with respect to some prespecified discriminating class D and an error parameter ε > 0, i.e. IPM_D(p, p_real) ≤ ε. We note that both players know D and ε.

The game proceeds in rounds, where in each round t the generator G submits to the discriminator a candidate distribution p_t, and the discriminator replies according to the following rule: if IPM_D(p_t, p_real) ≤ ε then the discriminator replies "WIN" and the game terminates. Else, the discriminator picks d_t ∈ D such that |p_real(d_t) − p_t(d_t)| > ε, and sends d_t to the generator along with a bit which indicates whether p_t(d_t) > p_real(d_t) or p_t(d_t) < p_real(d_t). Equivalently, instead of transmitting an extra bit, we assume that the discriminator always sends d_t ∈ D ∪ (1 − D) s.t.

$$p_{\mathrm{real}}(d_t) - p_t(d_t) > \varepsilon. \tag{3}$$

Definition 3 (Sequential–Foolability). Let ε > 0 and let D be a discriminating class.
1. D is called ε-Sequential–Foolable if there exists a generator G and a bound T = T(ε) such that G wins against any discriminator D, for any target distribution p_real, after at most T rounds.
2. The round complexity of Sequential–Fooling D is defined as the minimal upper bound T(ε) on the number of rounds that suffice to ε-Fool D.
3. D is called Sequential–Foolable if it is ε-Sequential–Foolable for every ε > 0 with T(ε) = poly(1/ε).

In the next section we will see that if D is ε-Sequential–Foolable for some fixed ε < 1/2, then it is Sequential–Foolable with round complexity T(ε) = O(1/ε²).
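The protocol of Definition 3 amounts to a short interaction loop. The following schematic sketch is ours: `generator` and `discriminator` are hypothetical objects with the stated interfaces, and the statistical content (how the generator updates) is deliberately left abstract.

```python
def sequential_sdg_game(generator, discriminator, eps, max_rounds):
    """One run of the Sequential-SDG game of Definition 3.

    generator.propose() returns a candidate distribution p_t.
    discriminator.respond(p_t, eps) returns None on "WIN" (i.e. when
    IPM_D(p_t, p_real) <= eps), and otherwise a distinguisher d_t in
    D ∪ (1 - D) with p_real(d_t) - p_t(d_t) > eps, as in Eq. (3).
    """
    for t in range(max_rounds):
        p_t = generator.propose()
        d_t = discriminator.respond(p_t, eps)
        if d_t is None:
            return p_t, t + 1        # generator wins after t+1 rounds
        generator.update(d_t)        # e.g. a no-regret update against d_t
    raise RuntimeError("round budget exhausted without fooling D")
```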
4 Results

Our main result characterizes DP–Foolability in terms of basic notions from differential privacy and PAC learning.

Theorem 1 (Characterization of DP–Fooling). The following statements are equivalent for a class D ⊆ {0, 1}^X:
1. D is privately and properly learnable in the agnostic PAC setting.
2. D is DP–Foolable.
3. D is sanitizable.
4. D satisfies the private uniform convergence property.

Theorem 1 shows a qualitative equivalence between the four notions; quantitative bounds on the entailed sample complexity are provided in the full version [10]. The implication Item 3 =⇒ Item 1 was known prior to this work and was proven in [6] (albeit for the pure case).

The equivalence among Items 2 to 4 is natural and expected. Indeed, each of them expresses the existence of a private algorithm that publishes, privately, certain estimates of all functions in D. The fact that Item 1 implies the other three items is perhaps more surprising and is the main contribution of this work; concretely, we show that Item 1 implies Item 2. Our proof exploits the sequential framework. In a nutshell, we observe that a class that is both sequentially foolable and privately PAC learnable is also DP-foolable: this follows by running a sequential SDG with the private discriminator that is assumed to exist, combined with standard composition and post-processing arguments regarding the privacy of the generator's output. Thus, to prove the implication we only need to show that private PAC learning implies sequential foolability. This follows from Corollary 2, which characterizes the sequentially foolable classes, together with a recent result by [1] showing that privately PAC learnable classes have finite Littlestone dimension. See the full version [10] for a complete proof.

Private learnability versus private uniform convergence. The equivalence Item 1 ⇐⇒ Item 4 is between private learning and private uniform convergence. The non-private analogue of this equivalence is a cornerstone in statistical learning; it reduces the statistical challenge of minimizing an unknown population loss to an optimization problem of minimizing a known empirical estimate. In particular, it yields the celebrated Empirical Risk Minimization (ERM) principle: "Output h ∈ H that minimizes the empirical loss". We therefore highlight this equivalence in the following corollary:

Corollary 1 (Private proper learning = private uniform convergence). Let H ⊆ {0, 1}^X. Then H is privately and properly PAC learnable if and only if H satisfies the private uniform convergence property.

Sequential–SDGs. We next describe our characterization of Sequential-SDGs. As discussed, this characterization is the technical heart behind the equivalence between private PAC learning and DP-foolability. Nevertheless, we believe that it may be of interest in its own right. We thus provide quantitative upper and lower bounds on the round complexity of Sequential-SDGs in terms of the Littlestone dimension (see [7] or the full version [10] for the exact definition).

Theorem 2 (Quantitative round-complexity bounds). Let D be a discriminating class with dual Littlestone dimension ℓ* and let T(ε) denote the round complexity of Sequential–Fooling D. Then,
1. $T(\varepsilon) = O\!\left(\frac{\ell^*}{\varepsilon^2} \log \ell^*\right)$ for every ε.
2. $T(\varepsilon) \ge \frac{\ell^*}{2}$ for every ε < 1/2.

It would be interesting to close the gap between the two bounds in terms of ε > 0, and we leave this for future work. To prove Item 1 we construct a generator with a winning strategy, which we outline in the full version [10]. A complete proof of Theorem 2 appears in the full version [10]. As a corollary we get the following characterization of Sequential–Foolability:

Corollary 2 (Characterization of Sequential–Foolability). The following are equivalent for D ⊆ {0, 1}^X:
1. D is Sequential–Foolable.
2. D is ε-Sequential–Foolable for some ε < 1/2.
3. D has a finite dual Littlestone dimension.
4. D has a finite Littlestone dimension.

Corollary 2 follows directly from Theorem 2 (which gives the equivalences 1 ⇐⇒ 2 ⇐⇒ 3) and from [8] (which gives the equivalence 3 ⇐⇒ 4; see the full version [10] for further detail).

Tightness of ε = 1/2.
The implication Item 2 =⇒ Item 1 can be seen as a boosting result: "weak" foolability for some fixed ε < 1/2 implies "strong" foolability for every ε. The following example demonstrates that the dependence on ε in Item 2 cannot be improved beyond 1/2: let X be the unit circle in R², and let D consist of all arcs whose length is exactly half of the circumference. It is easy to verify that the uniform distribution μ over X satisfies IPM_D(μ, p_real) ≤ 1/2 for any target distribution p_real (since μ(d) = 1/2 for all d ∈ D). Therefore D is (ε = 1/2)-Sequential–Foolable with round complexity T(1/2) = 1. On the other hand, D has an infinite Littlestone dimension and is therefore not Sequential–Foolable.

Sequential-SDGs versus DP-SDGs. So far we have introduced and characterized two formal setups for synthetic data generation. It is therefore natural to compare and seek connections between these two frameworks. We first note that the DP setting may only be more restrictive than the sequential setting:

Corollary 3 (DP–Foolability implies Sequential–Foolability). Let D be a class that is DP–Foolable. Then D has finite Littlestone dimension and in particular is Sequential–Foolable.

Corollary 3 follows from Theorem 1: indeed, the latter yields that DP–Foolability is equivalent to private agnostic proper PAC learnability (PAP-PAC), and by [1] PAP-PAC learnability implies a finite Littlestone dimension, which by Corollary 2 implies Sequential–Foolability.

Towards a converse of Corollary 3. By the above it follows that the family of classes D that can be fooled by a DP algorithm is contained in the family of all Sequential–Foolable classes; specifically, those which admit a Sequential-SDG with a differentially private discriminator. We do not know whether the converse holds, i.e. whether "Sequential–Foolability =⇒ DP–Foolability". Nevertheless, the implication "PAP-PAC learnability =⇒ DP–Foolability" (Theorem 1) can be regarded as an intermediate step towards this converse. Indeed, as discussed above, PAP-PAC learnability implies Sequential–Foolability. It is therefore natural to consider the following question, which is equivalent² to the converse of Corollary 3:

Question 1. Let D be a class that has finite Littlestone dimension. Is D properly and privately learnable in the agnostic PAC setting?

A weaker form of this question – whether every Littlestone class is privately PAC learnable – was posed by [1] as an open question (and was recently resolved in [11]).

² I.e. an affirmative answer to Question 1 is equivalent to the converse of Corollary 3.

5 Discussion

In this work we develop a theory for two types of constrained SDGs, sequential and private. Let us now discuss SDGs more generally: broadly, we consider algorithms that observe data, sampled from some real-life distribution, and in turn generate new synthetic examples that resemble real-life samples, without any a-priori constraints. For example, consider an algorithm that receives as input some tunes from a specific music genre (e.g. jazz, rock, pop) and then outputs a new tune. Recently, there has been a remarkable breakthrough in the construction of such SDGs with the introduction of the algorithmic frameworks of Generative Adversarial Networks (GANs) [18, 17], as well as Variational AutoEncoders (VAEs) [26, 31]. In turn, the use of SDGs has seen many potential applications [24, 30, 38]. Here we follow a common interpretation of SDGs as IPM minimizers [2, 4].
However, it was also observed [2, 3] that there is a critical gap between the task of generating new synthetic data (such as new tunes) and the IPM minimization problem. In detail, Observation 1 shows that the IPM framework allows certain "bad" solutions that memorize: specifically, let S be a sufficiently large independent sample from the target distribution and consider the empirical distribution as a candidate solution to the IPM minimization problem. Then, with high probability, the IPM distance between the empirical and the target distribution vanishes as |S| grows.

To illustrate the problem, imagine that our goal is to generate new jazz tunes. Let us consider the discriminating class of all human music experts. The solution suggested above uses the empirical distribution and simply "generates" a tune from the training set³. This clearly misses the goal of generating new and original tunes, but the IPM distance minimization framework does not discard this solution. For this reason we often invoke further restrictions on the SDG and consider constrained SDGs. For example, [4] suggests restricting the class of possible outputs p_syn and shows that, under certain assumptions on the distribution p_real, the right choice of class D leads to learning the true underlying distribution (in Wasserstein distance). In this work we explored two other types of constrained SDGs, DP–SDGs and Sequential–SDGs, and we characterized the foolable classes in a distribution-independent model, i.e. without making assumptions on the distribution p_real.

³ There are at most 7 · 10⁹ music experts in the world. Hence, by standard concentration inequalities, a sample of size roughly $\frac{9}{\varepsilon^2}\log 10$ suffices to achieve IPM distance at most ε with high probability.

One motivation for studying these models, as well as for the interest in a distribution-independent setting, is the following underlying question. The output of synthetic data generators should consist of new examples; but in what sense do we require the output to be novel or distinct from the training set? How, and in what sense, should we avoid copying the training data, or even outputting a memorized version of it?

Answering such questions is of practical importance. For example, consider a company that wishes to automatically generate music or images to be used commercially. One approach could be to train an SDG and then sell the generated output. What can we say about the output of SDGs in this context? Are the images generated by the SDG original? Are they copying the data, or breaching copyright? In this context, the differentially private setup comes with a very attractive interpretation that provides further motivation to study DP-SDGs, beyond preserving the privacy of the dataset.

To illustrate our interpretation of differential privacy as a criterion for originality, consider the following situation: imagine that Lisa is a learning painter. She has learned to paint by observing samples of paintings produced by a mentor painter, Mona. After a learning process, she draws a new painting L. Mona agrees that this new painting is a valid work of art, but claims the result is not an original painting but a mere copy of a painting, say M, produced by Mona. How can Lisa argue that painting L is not a plagiary? The easiest argument would be that she never observed M. However, this line of defence is not always realistic, as she must observe some paintings. Instead, we argue using the following thought experiment: what if Lisa had never observed M? Might she still have created L? If we could prove that this is the case, then one could argue that L is not a plagiary.
The last argument is captured by the notion of differential privacy. In a nutshell, a randomized algorithm that receives a sequence of data points x̄ as input is differentially private if removing/replacing a single data point in its input does not affect its output y by much; more accurately, for any event E over the output y that has non-negligible probability on input x̄, the probability remains non-negligible even after modifying one data point in x̄.

The sequential setting also comes with an appealing interpretation in this context. A remarkable property of existing SDGs (e.g. GANs), which potentially reduces the likelihood of memorization, is that the generator's access to the sample is masked. In more detail, the generator only has restricted access to the training set, via feedback from a discriminator that observes real data vs. synthetic data. Thus, potentially, the generator may avoid degenerate solutions that memorize. Nevertheless, even though the generator is not given direct access to the training data, it could still be that information about this data "leaks" through the feedback it receives from the discriminator. This raises the question of whether Sequential–Foolability can provide guarantees against memorization and, perhaps more importantly, in what sense. To start answering this question, part of this work aims to understand the interconnection between the task of Sequential-Fooling and the task of DP–Fooling.

Finally, the above questions also motivate our interest in a distribution-independent setting that avoids assumptions on the distribution p_real, which we often do not know. In detail, if we only cared about the resemblance between p_real and p_syn, then we might be content with any algorithm that performs well in practice, regardless of whether certain assumptions made in the analysis hold or not. But if we care to obtain guarantees against copying or memorizing, then these guarantees should hold as a matter of principle, and thus we should prefer to obtain them without overly strong assumptions on the distribution p_real.

Acknowledgments and Disclosure of Funding

R.L. is supported by an ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M. is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States - Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.

Broader Impact

There are no foreseen ethical or societal consequences for the research presented herein.
1. What is the focus and contribution of the paper regarding differentially private synthetic data generation? 2. What are the strengths of the proposed theoretical framework, particularly in unifying private PAC learning and DP synthetic data generation? 3. What are the weaknesses of the paper, especially regarding the applicability of the framework and the sample complexity gap? 4. Do you have any questions or concerns regarding the approach used in the paper? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper provides a theoretical framework to study how many IID samples from some target distribution are needed to generate differentially private (DP) synthetic data that is indistinguishable from the actual data w.r.t. a fixed class of statistical queries. Since prior solutions consider finite query classes, they generalize previous techniques to work for infinite query classes. Then the main question they address is what classes of statistical queries can be privately approximated with a polynomial number of IID samples from the actual distribution, or, by their terminology, what classes can be privately fooled (DP-foolable classes). They show that the classes that can be privately fooled are the same classes that are privately and properly PAC learnable. They extend their results by providing class equivalences to other learning tasks. The sample complexity bounds in previous work (for finite classes) depend on the size of the query class. Here they show that if the query class is privately (properly) PAC learnable, then the sample complexity is independent of the cardinality of the query class. Instead, their bounds depend on the Littlestone dimension of the query class. The approach in prior work on synthetic data generation consists of a sequential game between a data-generator player and a discriminator. On each round, the generator proposes a distribution to approximate the real distribution. Then the discriminator finds a query (from a fixed query class) with a high discrepancy between the real and fake data. A query class is sequentially foolable if the game converges to finding a distribution that is close to the true target distribution. The main result from Theorem 1 is: a class that is privately (properly) PAC learnable has finite Littlestone dimension (this follows from Alon et al. 2018) and is therefore sequentially foolable. Then a class that is sequentially foolable and privately (properly) PAC learnable can be used to construct the converging sequential game, as described in the above paragraph. Hence a privately (properly) PAC learnable class is DP-foolable.

Strengths The paper contributes a helpful theoretical framework unifying private PAC learning with the theory behind DP synthetic data generation. They provide the first sample complexity bounds that do not depend on the size of the query class.

Weaknesses The framework in this paper seems to apply only to binary classes, but we might also be interested in multi-label and continuous classes. The lower bound of Theorem 2 seems very weak: the upper bound applies to all epsilon, the lower bound only to epsilon less than 1/2, and the lower bound does not seem to have a dependence on epsilon. Perhaps more details and explanations should be included in the main paper to understand the problem's difficulty. Also, the authors may want to include some discussion about closing the sample complexity gap.
NIPS
Title Telescoping Density-Ratio Estimation

Abstract Density-ratio estimation via classification is a cornerstone of unsupervised learning. It has provided the foundation for state-of-the-art methods in representation learning and generative modelling, with the number of use-cases continuing to proliferate. However, it suffers from a critical limitation: it fails to accurately estimate ratios p/q for which the two densities differ significantly. Empirically, we find this occurs whenever the KL divergence between p and q exceeds tens of nats. To resolve this limitation, we introduce a new framework, telescoping density-ratio estimation (TRE), that enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE can yield substantial improvements over existing single-ratio methods for mutual information estimation, representation learning and energy-based modelling.

1 Introduction

Unsupervised learning via density-ratio estimation is a powerful paradigm in machine learning [60] that continues to be a source of major progress in the field. It consists of estimating the ratio p/q from their samples without separately estimating the numerator and denominator. A common way to achieve this is to train a neural network classifier to distinguish between the two sets of samples, since for many loss functions the ratio p/q can be extracted from the optimal classifier [60, 21, 41]. This discriminative approach has been leveraged in diverse areas such as covariate shift adaptation [59, 63], energy-based modelling [22, 4, 53, 64, 36, 19], generative adversarial networks [15, 47, 43], bias correction for generative models [20, 18], likelihood-free inference [50, 62, 8, 13], mutual-information estimation [2], representation learning [29, 30, 48, 25, 27], Bayesian experimental design [33, 34] and off-policy reward estimation in reinforcement learning [39]. Across this diverse set of applications, density-ratio based methods have consistently yielded state-of-the-art results.

Despite the successes of discriminative density-ratio estimation, many existing loss functions share a severe limitation. Whenever the 'gap' between p and q is large, the classifier can obtain almost perfect accuracy with a relatively poor estimate of the density ratio. We refer to this failure mode as the density-chasm problem; see Figure 1a for an illustration. We observe empirically that the density-chasm problem manifests whenever the KL divergence $D_{\mathrm{KL}}(p \,\|\, q)$ exceeds ∼ 20 nats¹. This observation accords with recent findings in the mutual information literature regarding the limitations of density-ratio based estimators of the KL [40, 52, 57]. In high dimensions, it can easily occur that two densities p and q have a KL divergence measuring in the hundreds of nats, and so the ratio may be virtually intractable to estimate with existing techniques.

In this paper, we propose a new framework for estimating density-ratios that can overcome the density-chasm problem. Our solution uses a 'divide-and-conquer' strategy composed of two steps. The first step is to gradually transport samples from p to samples from q, creating a chain of intermediate datasets. We then estimate the density-ratio between consecutive datasets along this chain, as illustrated in the top row of Figure 1b.

¹ A 'nat' is a unit of information measured using the natural logarithm (base e).
[Figure 1 (plots omitted): Illustration of standard density-ratio estimation vs. telescoping density-ratio estimation. (a) Density-ratio estimation between an extremely peaked Gaussian p (σ = 10⁻⁶) and a broad Gaussian q (σ = 1) using a single-parameter quadratic classifier (as detailed in Section 4.1). Left: a log-log plot of the densities and their ratio; p(x) is not visible, since the ratio overlaps it. Right: the solid blue line is the finite-sample logistic loss (Eq. 2) for 10,000 samples. Despite the large sample size, the minimiser (dotted blue line) is far from optimal (dotted black line); the dotted red line is the newly introduced TRE solution, which almost perfectly overlaps the dotted black line. (b) Telescoping density-ratio estimation applied to the problem in (a), using the same 10,000 samples from p and q, via the decomposition p/q = (p/p₁) × (p₁/p₂) × (p₂/p₃) × (p₃/q). Top row: the individual ratios, where p₁, p₂ and p₃ are constructed by deterministically interpolating between samples from p and q. Bottom row: the logistic loss for each ratio-estimation problem; the finite-sample minimisers of each objective (red dotted lines) are either close to or exactly overlapping their optima (black dotted lines). After estimating each ratio, the ratios are combined by taking their product.]

Unlike the original ratio p/q, these 'chained ratios' can be accurately estimated via classification (see the bottom row of Figure 1b). Finally, we combine the chained ratios via a telescoping product to obtain an estimate of the original density-ratio p/q. Thus, we refer to the method as Telescoping density-Ratio Estimation (TRE).

We empirically demonstrate that TRE can accurately estimate density-ratios using deep neural networks on high-dimensional problems, significantly outperforming existing single-ratio methods. We show this for two important applications: representation learning via mutual information (MI) estimation and the learning of energy-based models (EBMs). In the context of mutual information estimation, we show that TRE can accurately estimate large MI values of 30+ nats, which is recognised to be an outstanding problem in the literature [52]. However, obtaining accurate MI estimates is often not our sole objective; we also care about learning representations from e.g. audio or image data that are useful for downstream tasks such as classification or clustering. To this end, our experimental results for representation learning confirm that TRE offers substantial gains over a range of existing single-ratio baselines.

In the context of energy-based modelling, we show that TRE can be viewed as an extension of noise-contrastive estimation [22] that more efficiently scales to high-dimensional data. Whilst energy-based modelling has been a topic of interest in the machine learning community for some time [56], there has been a recent surge of interest, with a wave of new methods for learning deep EBMs in high dimensions [10, 6, 58, 38, 17, 68].
These methods have shown promising results for image and 3D shape synthesis [66], hybrid modelling [16], and modelling of exchangeable data [67]. However, many of these methods result in expensive/challenging optimisation problems, since they rely on approximate Markov chain Monte Carlo (MCMC) sampling during learning [10, 16, 68], or on adversarial optimisation [6, 17, 68]. In contrast, TRE requires no MCMC during learning and uses a well-defined, non-adversarial objective function. Moreover, as we show in our mutual information experiments, TRE is applicable to discrete data, whereas all other recent EBM methods only work for continuous random variables. Applicability to discrete data makes TRE especially promising for domains such as natural language processing, where noise-contrastive estimation has been widely used [42, 35, 1].

2 Discriminative ratio estimation and the density-chasm problem

Suppose p and q are two densities for which we have samples, and that q(x) > 0 whenever p(x) > 0. We can estimate the density-ratio r(x) = p(x)/q(x) by training a classifier to distinguish samples from p and q [23, 60, 22]. There are many choices for the loss function of the classifier [60, 51, 21, 41, 52], but in this paper we concentrate on the widely used logistic loss

$$\mathcal{L}(\theta) = -\mathbb{E}_{x_1 \sim p} \log\left(\frac{r(x_1; \theta)}{1 + r(x_1; \theta)}\right) - \mathbb{E}_{x_2 \sim q} \log\left(\frac{1}{1 + r(x_2; \theta)}\right), \tag{1}$$

where r(x; θ) is a non-negative ratio-estimating model. To enforce non-negativity, r is typically expressed as the exponential of an unconstrained function such as a neural network. For a correctly specified model, the minimiser of this loss, θ*, satisfies r(x; θ*) = p(x)/q(x), without needing any normalisation constraints [22]. Other classification losses do not always have this self-normalising property, but only yield an estimate proportional to the true ratio (see e.g. [52]).

The density-chasm problem. We experimentally find that density-ratio estimation via classification typically works well when p and q are 'close', e.g. when the KL divergence between them is less than ∼ 20 nats. However, for sufficiently large gaps, which we refer to as density-chasms, the ratio estimator is often severely inaccurate. This raises the obvious question: what is the cause of such inaccuracy? There are many possible sources of error: the use of misspecified models, imperfect optimisation algorithms, and inaccuracy stemming from Monte Carlo approximations of the expectations in (1). We argue that this mundane final point, Monte Carlo error due to finite sample size, is actually sufficient for inducing the density-chasm problem. Figure 1a depicts a toy problem for which the model is well-specified and, because it is 1-dimensional (w.r.t. θ), optimisation is straightforward using grid search. And yet, if we use a sample size of n = 10,000 and minimise the finite-sample loss

$$\mathcal{L}_n(\theta) = \sum_{i=1}^{n} -\log\left(\frac{r(x_1^i; \theta)}{1 + r(x_1^i; \theta)}\right) - \log\left(\frac{1}{1 + r(x_2^i; \theta)}\right), \quad x_1^i \sim p,\; x_2^i \sim q, \tag{2}$$

we obtain an estimate θ̂ that is far from the asymptotic minimiser θ* = arg min_θ L(θ). Repeating this same experiment for different sample sizes, we can empirically measure the method's sample efficiency, which is plotted as the blue curve in Figure 2. For the regime plotted, we see that an exponential increase in sample size only yields a linear decrease in estimation error. This empirical result is concordant with theoretical findings that density-ratio based lower bounds on KL divergences are only tight for sample sizes exponential in the number of nats [40].
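For concreteness, the following is a minimal reproduction of this style of experiment: a quadratic log-ratio model log r(x) = w·x² + b fitted to two centred Gaussians by grid search on the loss of Eq. (2). It is our own sketch, not the paper's code, and it uses a milder σ = 0.1 for the peaked density (rather than the 10⁻⁶ of Figure 1a) so that a coarse grid suffices; shrinking σ relative to the sample size is what exposes the chasm.

```python
import numpy as np

rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 0.1, size=10_000)  # samples of the peaked density p
x_q = rng.normal(0.0, 1.0, size=10_000)  # samples of the broad density q

def loss(w, b):
    """Finite-sample logistic loss of Eq. (2) for log r(x) = w*x^2 + b."""
    z_p = w * x_p**2 + b
    z_q = w * x_q**2 + b
    # -log(r/(1+r)) = log(1 + e^{-log r});  -log(1/(1+r)) = log(1 + e^{log r})
    return np.logaddexp(0.0, -z_p).mean() + np.logaddexp(0.0, z_q).mean()

# Grid search stands in for the optimiser.  For these two Gaussians the
# analytic optimum is w* = -(1/sigma_p^2 - 1)/2 = -49.5, b* = log(1/sigma_p).
ws, bs = np.linspace(-80.0, 0.0, 321), np.linspace(0.0, 4.0, 41)
w_hat, b_hat = min(((w, b) for w in ws for b in bs), key=lambda wb: loss(*wb))
print(w_hat, b_hat)
```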
Whilst we focus on the logistic loss, we believe the density-chasm problem is a broader phenomenon. As shown in the appendix, the issues identified in Figure 1 and the sample inefficiency seen in Figure 2 also occur for other commonly used discriminative loss functions. Thus, when faced with the density-chasm problem, simply increasing the sample size is a highly inefficient solution and not always possible in practice. This raises the question: is there a more intelligent way of using a fixed set of samples from p and q to estimate the ratio?

3 Telescoping density-ratio estimation

We introduce a new framework for estimating density-ratios p/q that can overcome the density-chasm problem in a sample-efficient manner. Intuitively, the density-chasm problem arises whenever classifying between p and q is 'too easy'. This suggests that it may be fruitful to decompose the task into a collection of harder sub-tasks. For convenience, we make the notational switch p ≡ p₀, q ≡ p_m (which we will keep going forward), and expand the ratio via a telescoping product

$$\frac{p_0(x)}{p_m(x)} = \frac{p_0(x)}{p_1(x)} \frac{p_1(x)}{p_2(x)} \cdots \frac{p_{m-2}(x)}{p_{m-1}(x)} \frac{p_{m-1}(x)}{p_m(x)}, \tag{3}$$

where, ideally, each p_k is chosen such that a classifier cannot easily distinguish it from its two neighbouring densities. Instead of attempting to build one large 'bridge' (i.e. density-ratio) across the density-chasm, we propose to build many small bridges between intermediate 'waymark' distributions. The two key components of the method are therefore:

1. Waymark creation. We require a method for gradually transporting samples {x₀¹, ..., x₀ⁿ} from p₀ to samples {x_m¹, ..., x_mⁿ} from p_m. At each step in the transportation, we obtain a new dataset {x_k¹, ..., x_kⁿ}, where k ∈ {0, ..., m}. Each intermediate dataset can be thought of as samples from an implicit distribution p_k, which we refer to as a waymark distribution.

2. Bridge-building. A method for learning a set of parametrised density-ratios between consecutive pairs of waymarks, r_k(x; θ_k) ≈ p_k(x)/p_{k+1}(x) for k = 0, ..., m − 1, where each bridge r_k is a non-negative function. We refer to these ratio-estimating models as bridges. Note that the parameters of the bridges, {θ_k}_{k=0}^{m−1}, can be totally independent or they can be partially shared.

An estimate of the original ratio is then given by the product of the bridges

$$r(x; \theta) = \prod_{k=0}^{m-1} r_k(x; \theta_k) \approx \prod_{k=0}^{m-1} \frac{p_k(x)}{p_{k+1}(x)} = \frac{p_0(x)}{p_m(x)}, \tag{4}$$

where θ is the concatenation of all θ_k vectors. Because of the telescoping product in (4), we refer to the method as Telescoping density-Ratio Estimation (TRE).

TRE has conceptual ties with a range of methods in optimisation, statistical physics and machine learning that leverage sequences of intermediate distributions, typically between a complex density p and a simple tractable density q. Of particular note are the methods of Simulated Annealing [32], Bridge Sampling & Path Sampling [14] and Annealed Importance Sampling (AIS) [45]. Whilst none of these methods estimate density ratios, and thus they serve fundamentally different purposes, they leverage similar ideas. In particular, AIS also computes a chain of density-ratios between artificially constructed intermediate distributions. It typically does this by first defining explicit expressions for the intermediate densities, and then trying to obtain samples via MCMC. In contrast, TRE implicitly defines the intermediate distributions via samples and then tries to learn the ratios. Additionally, in TRE we would like to evaluate the learned ratios in (4) at the same input x, whereas AIS only evaluates a ratio r_k at 'local' samples from, e.g., p_k.
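In log-space, the combination step of Eq. (4) is just a sum of the bridge outputs. The sketch below (our own helper, not the paper's code) includes an analytic sanity check with Gaussian waymarks, for which the telescoped sum recovers the full log-ratio exactly.

```python
import numpy as np

def telescoped_log_ratio(x, log_bridges):
    """Eq. (4) in log-space: estimates of log p_k/p_{k+1} simply add up."""
    return sum(b(x) for b in log_bridges)

# Sanity check: N(0, s^2) waymarks with ideal (analytic) bridges.
def gauss_logpdf(x, s):
    return -0.5 * (x / s) ** 2 - np.log(s) - 0.5 * np.log(2.0 * np.pi)

scales = [0.1, 0.3, 0.6, 1.0]  # std. deviations of p_0, p_1, p_2, p_3
log_bridges = [lambda x, a=a, b=b: gauss_logpdf(x, a) - gauss_logpdf(x, b)
               for a, b in zip(scales[:-1], scales[1:])]
x = 0.5
assert np.isclose(telescoped_log_ratio(x, log_bridges),
                  gauss_logpdf(x, 0.1) - gauss_logpdf(x, 1.0))
```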
3.1 Waymark creation

In this paper, we consider two simple, deterministic waymark-creation mechanisms: linear combinations and dimension-wise mixing. We find these mechanisms yield good performance and are computationally cheap. However, we note that other mechanisms are possible, and they are a promising topic for future work.

Linear combinations. Given a random pair x₀ ∼ p₀ and x_m ∼ p_m, define the kth waymark via

$$x_k = \sqrt{1 - \alpha_k^2}\, x_0 + \alpha_k x_m, \quad k = 0, \ldots, m, \tag{5}$$

where the α_k form an increasing sequence from 0 to 1, which controls the distance of x_k from x₀. For all of our experiments (except, for illustration purposes, those depicted in Figure 1), each dimension of p₀ and p_m has the same variance², and the coefficients in (5) are chosen to preserve this variance, with the goal being to match basic properties of the waymarks and thereby make consecutive classification problems harder.

² For MI estimation this always holds; for energy-based modelling this is enforceable via the choice of p_m.

Dimension-wise mixing. An alternative way to 'mix' two vectors is to concatenate different subsets of their dimensions. Given a d-length vector x, we can partition it into m sub-vectors of length d/m, assuming d is divisible by m. We denote this as x = (x[1], ..., x[m]), where each x[i] has length d/m. Using this notation, define the kth waymark via

$$x_k = (x_m[1], \ldots, x_m[k],\, x_0[k+1], \ldots, x_0[m]), \quad k = 0, \ldots, m, \tag{6}$$

where, again, x₀ ∼ p₀ and x_m ∼ p_m are randomly paired.

Number and spacing. Given these two waymark-generation mechanisms, we still need to decide the number of waymarks, m, and, in the case of linear combinations, how the α_k are spaced in the unit interval. We treat these quantities as hyperparameters, and demonstrate in the experiments (Section 4) that tuning them is feasible with a limited search budget.

3.2 Bridge-building

Each bridge r_k(x; θ_k) in (4) can be learned via binary classification using a logistic loss function, as described in Section 2. Solving this collection of classification tasks is therefore a multi-task learning (MTL) problem (see [55] for a review). Two key questions in MTL are how to share parameters and how to define a joint objective function.

Parameter sharing. We break the construction of the bridges r_k(x; θ_k) into two stages: a (mostly) shared body computing hidden vectors f_k(x)³, followed by bridge-specific heads. The body f_k is a deep neural network with shared parameters and pre-activation per-hidden-unit scales and biases for each bridge (see appendix for details). Similar parameter-sharing schemes have been successfully used in the multi-task learning literature [7, 11]. The heads map the hidden vectors f_k(x) to the scalar log r_k(x; θ_k). We use either linear or quadratic mappings depending on the application; the precise parameterisation is stated in each experiment section.

³ For simplicity, we suppress the parameters of f_k, and will do the same for r_k in the experiments section.

TRE loss function. The TRE loss function is given by the average of the m logistic losses

$$\mathcal{L}_{\mathrm{TRE}}(\theta) = \frac{1}{m} \sum_{k=0}^{m-1} \mathcal{L}_k(\theta_k), \tag{7}$$

$$\mathcal{L}_k(\theta_k) = -\mathbb{E}_{x_k \sim p_k} \log\left(\frac{r_k(x_k; \theta_k)}{1 + r_k(x_k; \theta_k)}\right) - \mathbb{E}_{x_{k+1} \sim p_{k+1}} \log\left(\frac{1}{1 + r_k(x_{k+1}; \theta_k)}\right). \tag{8}$$

This simple unweighted average works well empirically. More sophisticated multi-task weighting schemes exist [5], but preliminary experiments suggested they were not worth the extra complexity.
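The objective of Eqs. (7)-(8) reduces to an average of per-bridge logistic losses. The following minimal sketch uses our own interface (vectorised log-bridge callables and one sample batch per waymark) and is written for clarity rather than for any particular deep-learning framework.

```python
import numpy as np

def tre_loss(log_bridges, waymark_batches):
    """Average logistic loss of Eqs. (7)-(8).

    log_bridges: m callables, log_bridges[k](x) = log r_k(x; theta_k),
        each applied elementwise to a batch.
    waymark_batches: m+1 arrays; waymark_batches[k] holds samples of p_k.
    """
    m = len(log_bridges)
    total = 0.0
    for k in range(m):
        z_num = log_bridges[k](waymark_batches[k])      # x_k ~ p_k
        z_den = log_bridges[k](waymark_batches[k + 1])  # x_{k+1} ~ p_{k+1}
        # -log(r/(1+r)) = log(1+e^{-z});  -log(1/(1+r)) = log(1+e^{z})
        total += (np.logaddexp(0.0, -z_num).mean()
                  + np.logaddexp(0.0, z_den).mean())
    return total / m
```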
An important aspect of this loss function is that each ratio estimator r_k sees different samples during training. In particular, r₀ sees samples close to the real data, i.e. from p₀ and p₁, while the final ratio r_{m−1} sees data from p_{m−1} and p_m. This creates a potential mismatch between training and deployment, since after learning we would like to evaluate all ratios at the same input x. In our experiments, we do not find this mismatch to be a problem, suggesting that each ratio, despite seeing different inputs during training, is able to generalise to new test points. We speculate that this generalisation is encouraged by parameter sharing, which allows each ratio estimator to be indirectly influenced by samples from all waymark distributions. Nevertheless, we think a deeper analysis of this issue of generalisation deserves further work.

3.3 TRE applied to mutual information estimation

The mutual information (MI) between two random variables u and v can be written as

$$I(\mathbf{u}, \mathbf{v}) = \mathbb{E}_{p(\mathbf{u},\mathbf{v})}\left[\log r(\mathbf{u}, \mathbf{v})\right], \quad r(\mathbf{u}, \mathbf{v}) = \frac{p(\mathbf{u}, \mathbf{v})}{p(\mathbf{u})\, p(\mathbf{v})}. \tag{9}$$

Given samples from the joint density p(u, v), one obtains samples from the product-of-marginals p(u)p(v) by shuffling the v vectors across the dataset. This then enables standard density-ratio estimation to be performed. For TRE, we require waymark samples. To generate these, we take a sample from the joint, x₀ = (u, v₀), and a sample from the product-of-marginals, x_m = (u, v_m), where u is held fixed and only v is altered. We then apply a waymark-construction mechanism from Section 3.1 to generate x_k = (u, v_k), for k = 0, ..., m.

3.4 TRE applied to energy-based modelling

An energy-based model (EBM) is a flexible parametric family {φ(x; θ)} of non-negative functions, where each function is proportional to a probability density. Given samples from a data distribution with density p(x), the goal of energy-based modelling is to find a parameter θ* such that φ(x; θ*) is 'close' to c·p(x), for some positive constant c. In this paper, we consider EBMs of the form φ(x; θ) = r(x; θ)q(x), where q is a known density (e.g. a Gaussian or a normalising flow) that we can sample from, and r is an unconstrained positive function. Given this parameterisation, the optimal r simply equals the density-ratio p(x)/q(x), and hence the problem of learning an EBM becomes the problem of estimating a density-ratio, which can be solved via TRE. We note that, since TRE actually estimates a product of ratios as stated in Equation 4, the final EBM will be a product-of-experts model [26] of the form $\phi(x; \theta) = \prod_{k=0}^{m-1} r_k(x; \theta_k)\, q(x)$.

The estimation of EBMs via density-ratio estimation has been studied in multiple prior works, including noise-contrastive estimation (NCE) [22], which has many appealing theoretical properties [22, 54, 65]. Following NCE, we will refer to the known density q as the 'noise distribution'.

4 Experiments

We include two toy examples illustrating both the correctness of TRE and the fact that it can solve problems which verge on the intractable for standard density-ratio estimation. We then demonstrate the utility of TRE on two high-dimensional complex tasks, providing clear evidence that it substantially improves on standard single-ratio baselines. For experiments with continuous random variables, we use the linear-combination waymark mechanism in (5); otherwise, for discrete variables, we use dimension-wise mixing (6). For the linear-combination mechanism, we collapse the α_k into a single spacing hyperparameter, and grid-search over this value along with the number of waymarks. Details are in the appendix.
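Both waymark mechanisms of Section 3.1 are a few lines each. The sketch below is ours, with the variance-preserving coefficients of Eq. (5) and the contiguous-block mixing of Eq. (6).

```python
import numpy as np

def linear_waymarks(x0, xm, alphas):
    """Eq. (5): x_k = sqrt(1 - a_k^2) x0 + a_k xm, for an increasing
    sequence alphas from 0 to 1 (a_0 = 0 gives x0, a_m = 1 gives xm)."""
    return [np.sqrt(1.0 - a**2) * x0 + a * xm for a in alphas]

def dimwise_waymarks(x0, xm, m):
    """Eq. (6): waymark k copies the first k*(d/m) dimensions from xm
    and keeps the remaining dimensions of x0 (d must be divisible by m)."""
    d = x0.shape[0]
    assert d % m == 0
    step = d // m
    marks = []
    for k in range(m + 1):
        xk = x0.copy()
        xk[: k * step] = xm[: k * step]
        marks.append(xk)
    return marks

rng = np.random.default_rng(0)
x0, xm = rng.normal(size=8), rng.normal(size=8)
print(linear_waymarks(x0, xm, np.linspace(0.0, 1.0, 5))[2])
print(dimwise_waymarks(x0, xm, m=4)[2])
```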
4.1 1d peaked ratio

The basic setup is stated in Figure 1a. For TRE, we use quadratic bridges of the form log r_k(x) = w_k x² + b_k, where b_k is set to its ground-truth value (as derived in the appendix), and w_k is reparametrised as exp(θ_k) to avoid unnecessary log-scales in Figure 1. The single-ratio estimation results use the same parameterisation (dropping the subscript k). Figure 2 shows the full results. These sample-efficiency curves clearly demonstrate that, across all sample sizes, TRE is significantly more accurate than single-ratio estimation. In fact, TRE obtains a better solution with 100 samples than single-ratio estimation does with 100,000 samples: a three-orders-of-magnitude improvement.

4.2 High-dimensional ratio with large MI

This toy problem has been widely used in the mutual information literature [2, 52]. Let x ∈ R^{2d} be a Gaussian random variable with a block-diagonal covariance matrix, where each block is 2×2 with 1 on the diagonal and 0.8 on the off-diagonal. We then estimate the ratio between this Gaussian and a standard normal distribution. This problem can be viewed as an MI estimation task or an energy-based modelling task (see the appendix for full details). We apply TRE using quadratic bridges of the form log r_k(x) = xᵀW_k x + b_k. The results in Figure 3 show that single-ratio estimation becomes severely inaccurate for MI values greater than 20 nats. In contrast, TRE can accurately estimate MI values as large as 80 nats for 320-dimensional variables. To our knowledge, TRE is the first discriminative MI estimation method that can scale this gracefully.

4.3 MI estimation & representation learning on SpatialMultiOmniglot

We applied TRE to the SpatialMultiOmniglot problem taken from [49]⁴, where characters from Omniglot are spatially stacked in an n × n grid, and each grid position contains characters from a fixed alphabet. Following [49], the individual pixel values of the characters are not considered random variables; rather, we treat the grid as a collection of n² categorical random variables whose realisations are the characters from the respective alphabet. Pairs of grids, (u, v), are then formed such that corresponding grid positions contain alphabetically consecutive characters. Given this setup, the ground-truth MI can be calculated (see appendix).

Each bridge in TRE uses a separable architecture [52] given by log r_k(u, v) = g(u)ᵀ W_k f_k(v), where g and f_k are 14-layer convolutional ResNets [24] and f_k uses the parameter-sharing scheme described in Section 3.2. We note that separable architectures are standard in the MI-based representation learning literature [52]. We construct waymarks using the dimension-wise mixing mechanism (6) with m = n² (i.e. one dimension is mixed at a time). After learning, we adopt a standard linear evaluation protocol (see e.g. [48]), where we train supervised linear classifiers on top of the output layer g(u) to predict the alphabetic position of each character in u.

We compare our results to those reported in [49]. Specifically, we report their baseline method, contrastive predictive coding (CPC) [48], a state-of-the-art representation learning method based on single density-ratio estimation, along with their variant, Wasserstein predictive coding (WPC). Figure 4 shows the results. The left plot shows that only TRE can accurately estimate high MI values of ∼ 35 nats⁵. The representation learning results (right) show that all single density-ratio baselines degrade significantly in performance as we increase the number of characters in a grid (and hence increase the MI). In contrast, TRE always obtains greater than 97% accuracy.

⁴ We mirror their experimental setup as accurately as possible; however, we were unable to obtain their code.
⁵ [49] do not provide MI estimates for CPC & WPC, but [52] shows that they are bounded by the log batch-size.
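Once the bridges are trained, both the separable scoring just described and the MI estimate of Eq. (9) are one-liners. The sketch below is ours, with random arrays standing in for learned features purely as a shape check.

```python
import numpy as np

def separable_log_bridge(g_u, f_v, W):
    """Section 4.3's separable score log r_k(u, v) = g(u)^T W_k f_k(v),
    evaluated on precomputed feature batches g_u (n, d1), f_v (n, d2)."""
    return np.einsum("ni,ij,nj->n", g_u, W, f_v)

def mi_estimate(telescoped_log_ratios):
    """Eq. (9) plug-in: average sum_k log r_k(u, v) over joint samples."""
    return float(np.mean(telescoped_log_ratios))

# Shape check with hypothetical feature dimensions.
rng = np.random.default_rng(0)
g_u, f_v = rng.normal(size=(4, 3)), rng.normal(size=(4, 5))
W = rng.normal(size=(3, 5))
print(mi_estimate(separable_log_bridge(g_u, f_v, W)))
```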
4.4 Energy-based modelling on MNIST

As explained in Section 3.4, TRE can be used to estimate an energy-based model of the form $\phi(x; \theta) = \prod_{k=0}^{m-1} r_k(x; \theta_k)\, q(x)$, where q is a pre-specified 'noise' distribution from which we can sample, and the product of ratios is given by TRE. In this section, we demonstrate that such an approach can scale to high-dimensional data, by learning energy-based models of the MNIST handwritten digit dataset [37]. We consider three choices of the noise distribution: a multivariate Gaussian, a Gaussian copula and a rational-quadratic neural spline flow (RQ-NSF) [12] with coupling layers [9, 31]. Each distribution is first fitted to the data via maximum likelihood estimation (see appendix for details).

[Figure 5 (image grids omitted): MNIST samples. Each row pertains to a particular noise distribution (Gaussian, Copula, RQ-NSF). The first block shows exact samples from that distribution. The second & third blocks show MCMC samples from an EBM learned with NCE & TRE, respectively.]

Each of these noise distributions can be expressed as an invertible transformation of a standard normal distribution. That is, each random variable has the form F(z), where z ∼ N(0, I). Since F already encodes useful information about the data distribution, it makes sense to leverage this when constructing the waymarks in TRE. Specifically, we can generate linear-combination waymarks via (5) in z-space, and then map them back to x-space, giving

$$x_k = F\!\left(\sqrt{1 - \alpha_k^2}\, F^{-1}(x_0) + \alpha_k F^{-1}(x_m)\right). \tag{10}$$

For a Gaussian, F is linear, and hence (10) is identical to the original waymark mechanism in (5).
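Eq. (10) is simply the x-space rule (5) conjugated by the noise flow; schematically (our own interface, with `forward` and `inverse` standing for a trained flow F and its inverse):

```python
import numpy as np

def z_space_waymarks(x0, xm, alphas, forward, inverse):
    """Eq. (10): interpolate in the base space of the noise flow F.

    forward:  F, mapping z ~ N(0, I) to data space.
    inverse:  F^{-1}, mapping data back to z-space.
    """
    z0, zm = inverse(x0), inverse(xm)
    return [forward(np.sqrt(1.0 - a**2) * z0 + a * zm) for a in alphas]

# When F is linear (the Gaussian noise case, x = mu + L z), this reduces
# exactly to the x-space mechanism of Eq. (5), as noted in the text.
```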
We use the parameter-sharing scheme from Section 3.2 together with quadratic heads. This gives log r_k(x) = −f_k(x)ᵀ W_k f_k(x) − f_k(x)ᵀ b_k − c_k, where we set f_k to be an 18-layer convolutional ResNet and constrain W_k to be positive definite. This constraint enforces an upper limit on the log-density of the EBM, which has been useful in other work [44, 46], and improves results here.

We evaluate the learned EBMs quantitatively via estimated log-likelihood in Table 1 and qualitatively via random samples from the model in Figure 5. For both of these evaluations, we employ NUTS [28] to perform annealed MCMC sampling, as explained in the appendix. This annealing procedure provides two estimators of the log-likelihood: the Annealed Importance Sampling (AIS) estimator [45] and the more conservative Reverse Annealed Importance Sampling Estimator (RAISE) [3].

The results in Table 1 and Figure 5 show that single-ratio estimation performs poorly in high dimensions for simple choices of the noise distribution, and only works well if we use a complex neural density-estimator (RQ-NSF). This illustrates the density-chasm problem explained in Section 2. In contrast, TRE yields improvements for all choices of the noise, as measured by the approximate log-likelihood and the visual fidelity of the samples.

TRE's improvement over the Gaussian noise distribution is particularly large: the bits per dimension (bpd) is around 0.66 lower, corresponding to an improvement of roughly 360 nats. Moreover, the samples are significantly more coherent, and appear to be of higher fidelity than the RQ-NSF samples⁶, despite the fact that TRE (with Gaussian noise) has a worse log-likelihood. This final point is not contradictory, since log-likelihood and sample quality are known to be only loosely connected [61].

⁶ We emphasise here that the quality of the RQ-NSF model depends on the exact architecture; a larger model may yield better samples. Thus, we do not claim that TRE generally yields superior results in any sense.

Finally, we analysed the sensitivity of our results to the construction of the waymarks and include the results in the appendix. Using TRE with a copula noise distribution as an illustrative case, we found that varying the number of waymarks between 5 and 30 caused only minor changes in the approximate log-likelihoods, no greater than 0.03 bpd. We also found that if we omit the z-space waymark mechanism in (10) and work in x-space, then TRE's negative log-likelihood increases to 1.33 bpd, as measured by RAISE. This is still significantly better than single-ratio estimation, but it does show that the quality of the results depends on the exact waymark mechanism.

5 Conclusion

We introduced a new framework, Telescoping density-Ratio Estimation (TRE), for learning density-ratios that, unlike existing discriminative methods, can accurately estimate ratios between extremely different densities in high dimensions. TRE admits many exciting directions for future work. Firstly, we would like a deeper theoretical understanding of why it is so much more sample-efficient than standard density-ratio estimation. The relationship between TRE and standard methods is structurally similar to the relationship between annealed importance sampling and standard importance sampling; exploring this connection further may be fruitful. Relatedly, we believe that TRE would benefit from further research on waymark mechanisms. We presented simple mechanisms that have clear utility for both discrete and continuous-valued data. However, we suspect more sophisticated choices may yield improvements, especially if one can leverage domain- or task-specific assumptions to intelligently decompose the density-ratio problem. Lastly, whilst this paper has focused on the logistic loss, it would be interesting to more deeply investigate TRE with other discriminative loss functions.

Broader Impact

As outlined in the introduction, density-ratio estimation is a foundational tool in machine learning with diverse applications. Our work, which improves density-ratio estimation, may therefore increase the scope and power of a wide spectrum of techniques used both in research and real-world settings. The broad utility of our contribution makes it challenging to concretely assess the societal impact of the work. However, we do discuss here two applications of density-ratio estimation with obvious potential for positive & negative impacts on society.

Generative Adversarial Networks [15] are a popular class of models which are often trained via density-ratio estimation and are able to generate photo-realistic image/video content. To the extent that TRE can enhance GAN training (a topic we do not treat in this paper), our work could conceivably lead to enhanced 'deepfakes', which can be maliciously used in fake news or identity fraud.
More positively, density-ratio estimation is being used to correct for dataset bias, including the presence of skewed demographic factors like race and gender [18]. While we are excited about such applications, we emphasise that density-ratio based methods are not a panacea; it is entirely possible for the technique to introduce new biases when correcting for existing ones. Future work should continue to be mindful of this possibility and look for ways to address the issue if it arises.

Acknowledgments and Disclosure of Funding

Benjamin Rhodes was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh. Kai was supported by the Edinburgh Huawei Research Lab at the University of Edinburgh, funded by Huawei Technologies Co. Ltd.
1. What is the focus and contribution of the paper regarding density ratio estimation?
2. What are the strengths of the proposed approach, particularly in its implementation and application to downstream tasks?
3. What are the weaknesses of the paper, especially regarding its presentation and empirical results for energy-based modeling?
4. Do you have any questions or concerns about the paper's idea and its difference from other approaches like transporting densities?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions

This paper introduces a clever idea for estimating density ratios via discriminative methods in a regime where they counter-intuitively fail: when distributions are easy to distinguish because they are far apart. Important problems like KL divergence and mutual information estimation require modeling density ratios, and while discriminative approaches have been successful in these cases, they are widely known to have the problem that is addressed by this paper. The paper suggests bridging the gap between distributions with a sequence of intermediate distributions, and explores the best way to construct the bridge and to parametrize the family of discriminators that is now required. The paper demonstrates reasonable results on two relevant downstream tasks: mutual information estimation and density modeling.

Strengths

- The basic idea for bridging the gap is elegant and seems to differ in implementation from the standard "bridging" techniques in annealed importance sampling, though more discussion of that would be appreciated.
- I have encountered versions of the "density chasm" problem, as the authors formulate it, many times in information estimation, so I find new solutions to be useful. (Approaches based on transporting densities, like Wasserstein, are not as easily combined with other methods as this approach, in my experience.)
- Mutual information estimation and density modeling were sensible choices for evaluation. While the MI estimation tasks look somewhat artificial, I appreciate the difficulty of finding non-trivial examples where the ground truth is known. It is good to see the approach shines in this particular case.

Weaknesses

The main issue for me was the presentation and empirical results for the energy-based modeling.
- MNIST is rather easy to model, so while the results were instructive as demonstrations, they don't really hint at whether the approach can work in more complicated domains.
- My main confusion was with the framing of the approach as energy-based modeling. In the first few mentions, it wasn't obvious why the approach was relevant for this problem. Then the introduction of EBMs is very brief and gives no reminder of the noise-contrastive estimation setup; I had to refresh myself on NCE before this section made sense. I think it would be better to say, throughout, that this idea can be used for NCE, and then remind readers that NCE is a way to learn unnormalized/energy-based models. NCE is somewhat special compared to the EBMs that first came to mind, since one has to specify the noise distribution. Since I hadn't looked at NCE lately, the sudden introduction of noise distributions without context was hard to understand.
1. What is the focus and contribution of the paper on density-ratio estimation?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness and efficiency?
3. What are the weaknesses of the paper, especially regarding its design choices and experimental analysis?
4. Do you have any concerns about the generalizability of the results?
5. Why did the authors not provide comparisons with other models besides "single ratio" estimations?
Summary and Contributions

The paper targets the problem of density-ratio estimation via discriminative classification for cases of large differences between the densities. In these cases, classification is easy enough that discriminative classifiers do not have to represent the densities accurately, which leads to large errors when they are used for density-ratio estimation. The paper proposes a telescoping mechanism in which one density is transformed stepwise towards the other. Classification between these very similar intermediate densities is much harder, and the discriminative classifier is thus forced towards more accurate density estimates.

Strengths

The idea of the paper is both straightforward and effective, which is interesting. The experiments show estimates that are close to the ground truth even for "problematic" data. Fig. 4 (right) shows strong benefits of the introduced approach compared to a very recent NeurIPS paper. Interestingly, the required number of samples decreases significantly, since it is no longer important to acquire samples from low-density regions (which takes an exponentially increasing number of samples when assuming an exponential model, as in Fig. 2).

Weaknesses

Many design choices appear quite ad hoc. With few analytical guarantees and only a handful of experiments (most of them involving variants of Gaussian noise), it is hard to assess how generalizable the results are. No comparisons to models other than "single ratio" estimation are given. This raises the question of why this problem was never addressed before or, if it was, why there is no comparison.
NIPS
Title Telescoping Density-Ratio Estimation Abstract Density-ratio estimation via classification is a cornerstone of unsupervised learning. It has provided the foundation for state-of-the-art methods in representation learning and generative modelling, with the number of use-cases continuing to proliferate. However, it suffers from a critical limitation: it fails to accurately estimate ratios p/q for which the two densities differ significantly. Empirically, we find this occurs whenever the KL divergence between p and q exceeds tens of nats. To resolve this limitation, we introduce a new framework, telescoping density-ratio estimation (TRE), that enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE can yield substantial improvements over existing single-ratio methods for mutual information estimation, representation learning and energy-based modelling. 1 Introduction Unsupervised learning via density-ratio estimation is a powerful paradigm in machine learning [60] that continues to be a source of major progress in the field. It consists of estimating the ratio p/q from their samples without separately estimating the numerator and denominator. A common way to achieve this is to train a neural network classifier to distinguish between the two sets of samples, since for many loss functions the ratio p/q can be extracted from the optimal classifier [60, 21, 41]. This discriminative approach has been leveraged in diverse areas such as covariate shift adaptation [59, 63], energy-based modelling [22, 4, 53, 64, 36, 19], generative adversarial networks [15, 47, 43], bias correction for generative models [20, 18], likelihood-free inference [50, 62, 8, 13], mutualinformation estimation [2], representation learning [29, 30, 48, 25, 27], Bayesian experimental design [33, 34] and off-policy reward estimation in reinforcement learning [39]. Across this diverse set of applications, density-ratio based methods have consistently yielded state-of-the-art results. Despite the successes of discriminative density-ratio estimation, many existing loss functions share a severe limitation. Whenever the ‘gap’ between p and q is large, the classifier can obtain almost perfect accuracy with a relatively poor estimate of the density ratio. We refer to this failure mode as the density-chasm problem—see Figure 1a for an illustration. We observe empirically that the density-chasm problem manifests whenever the KL-divergence DKL(p ‖ q) exceeds ∼ 20 nats1. This observation accords with recent findings in the mutual information literature regarding the limitations of density-ratio based estimators of the KL [40, 52, 57]. In high dimensions, it can easily occur that two densities p and q will have a KL-divergence measuring in the hundreds of nats, and so the ratio may be virtually intractable to estimate with existing techniques. In this paper, we propose a new framework for estimating density-ratios that can overcome the density-chasm problem. Our solution uses a ‘divide-and-conquer’ strategy composed of two steps. The first step is to gradually transport samples from p to samples from q, creating a chain of intermediate datasets. We then estimate the density-ratio between consecutive datasets along this 1‘nat’ being a unit of information measured using the natural logarithm (base e) 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
100 10 1 10 2 0 10 2 10 1 100 x 0 10 1 101 103 105 de ns ity /ra tio v al ue p(x) q(x) p(x) q(x) 4 6 8 10 12 14 0.0 0.5 1.0 1.5 lo gi st ic lo ss n( ) * TRE (a) Density-ratio estimation between an extremely peaked Gaussian p (σ = 10−6) and a broad Gaussian q (σ = 1) using a single-parameter quadratic classifier (as detailed in section 4.1). Left: A log-log scale plot of the densities and their ratio. Note that p(x) is not visible, since the ratio overlaps it. Right: the solid blue line is the finite-sample logistic loss (Eq. 2) for 10,000 samples. Despite the large sample size, the minimiser (dotted blue line) is far from optimal (dotted black line). The dotted red line is the newly introduced TRE solution, which almost perfectly overlaps with the dotted black line. p q = p p1 × p1 p2 × p2 p3 × p3 q 100 100 x 0 106 de ns ity /ra tio v al ue p(x) p1(x) p(x) p1(x) 10 12 14 0 0.0 1.5 lo gi st ic lo ss n 0( 0) 0 * 0 100 100 x 0 103 p1(x) p2(x) p1(x) p2(x) 6 7 8 9 1 n 1( 1) 1 * 1 100 100 x 0 101 p2(x) p3(x) p2(x) p3(x) 3 4 5 2 n 2( 2) 2 * 2 100 100 x 0 101 p3(x) q(x) p3(x) q(x) 0.5 1.0 1.5 2.0 3 n 3( 3) 3 * 3 (b) Telescoping density-ratio estimation applied to the problem in (a), using the same 10,000 samples from p and q. Top row: a collection of ratios, where p1, p2 and p3 are constructed by deterministically interpolating between samples from p and q. Bottom row: the logistic loss function for each ratio estimation problem. Observe that the finite-sample minimisers of each objective (red dotted lines) are either close to or exactly overlapping their optima (black dotted lines). After estimating each ratio, we then combine them by taking their product. Figure 1: Illustration of standard density-ratio estimation vs. telescoping density-ratio estimation. chain, as illustrated in the top row of Figure 1b. Unlike the original ratio p/q, these ‘chained ratios’ can be accurately estimated via classification (see bottom row). Finally, we combine the chained ratios via a telescoping product to obtain an estimate of the original density-ratio p/q. Thus, we refer to the method as Telescoping density-Ratio Estimation (TRE). We empirically demonstrate that TRE can accurately estimate density-ratios using deep neural networks on high-dimensional problems, significantly outperforming existing single-ratio methods. We show this for two important applications: representation learning via mutual information (MI) estimation and the learning of energy-based models (EBMs). In the context of mutual information estimation, we show that TRE can accurately estimate large MI values of 30+ nats, which is recognised to be an outstanding problem in the literature [52]. However, obtaining accurate MI estimates is often not our sole objective; we also care about learning representations from e.g. audio or image data that are useful for downstream tasks such as classification or clustering. To this end, our experimental results for representation learning confirm that TRE offers substantial gains over a range of existing single-ratio baselines. In the context of energy-based modelling, we show that TRE can be viewed as an extension of noisecontrastive estimation [22] that more efficiently scales to high-dimensional data. Whilst energy-based modelling has been a topic of interest in the machine learning community for some time [56], there has been a recent surge of interest, with a wave of new methods for learning deep EBMs in high dimensions [10, 6, 58, 38, 17, 68]. 
These methods have shown promising results for image and 3D shape synthesis [66], hybrid modelling [16], and modelling of exchangeable data [67]. However, many of these methods result in expensive/challenging optimisation problems, since they rely on approximate Markov chain Monte Carlo (MCMC) sampling during learning [10, 16, 68], or on adversarial optimisation [6, 17, 68]. In contrast, TRE requires no MCMC during learning and uses a well-defined, non-adversarial, objective function. Moreover, as we show in our mutual information experiments, TRE is applicable to discrete data, whereas all other recent EBM methods only work for continuous random variables. Applicability to discrete data makes TRE especially promising for domains such as natural language processing, where noise-contrastive estimation has been widely used [42, 35, 1]. 2 Discriminative ratio estimation and the density-chasm problem Suppose p and q are two densities for which we have samples, and that q(x) > 0 whenever p(x) > 0. We can estimate the density-ratio r(x) = p(x)/q(x) by training a classifier to distinguish samples from p and q [23, 60, 22]. There are many choices for the loss function of the classifier [60, 51, 21, 41, 52], but in this paper we concentrate on the widely used logistic loss L(θ) = −Ex1∼p log ( r(x1;θ) 1 + r(x1;θ) ) − Ex2∼q log ( 1 1 + r(x2;θ) ) , (1) where r(x;θ) is a non-negative ratio estimating model. To enforce non-negativity, r is typically expressed as the exponential of an unconstrained function such as a neural network. For a correctly specified model, the minimiser of this loss, θ∗, satisfies r(x;θ∗) = p(x)/q(x), without needing any normalisation constraints [22]. Other classification losses do not always have this self-normalising property, but only yield an estimate proportional to the true ratio—see e.g. [52]. The density-chasm problem We experimentally find that density-ratio estimation via classification typically works well when p and q are ‘close’ e.g. the KL divergence between them is less than ∼ 20 nats. However, for sufficiently large gaps, which we refer to as density-chasms, the ratio estimator is often severely inaccurate. This raises the obvious question: what is the cause of such inaccuracy? There are many possible sources of error: the use of misspecified models, imperfect optimisation algorithms, and inaccuracy stemming from Monte Carlo approximations of the expectations in (1). We argue that this mundane final point—Monte Carlo error due to finite sample size—is actually sufficient for inducing the densitychasm problem. Figure 1a depicts a toy problem for which the model is well-specified, and because it is 1-dimensional (w.r.t. θ), optimisation is straightforward using grid-search. And yet, if we use a sample size of n = 10, 000 and minimise the finite-sample loss Ln(θ) = n∑ i=1 − log ( r(xi1; θ) 1 + r(xi1; θ) ) − log ( 1 1 + r(xi2; θ) ) , xi1 ∼ p, xi2 ∼ q, (2) we obtain an estimate θ̂ that is far from the asymptotic minimiser θ∗ = arg min L(θ). Repeating this same experiment for different sample sizes, we can empirically measure the method’s sample efficiency, which is plotted as the blue curve in Figure 2. For the regime plotted, we see that an exponential increase in sample size only yields a linear decrease in estimation error. This empirical result is concordant with theoretical findings that density-ratio based lower bounds on KL divergences are only tight for sample sizes exponential in the the number of nats [40]. 
Whilst we focus on the logistic loss, we believe the density chasm problem is a broader phenomenon. As shown in the appendix, the issues identified in Figure 1 and the sample inefficiency seen in Figure 2 also occur for other commonly used discriminative loss functions. Thus, when faced with the density-chasm problem, simply increasing the sample size is a highly inefficient solution and not always possible in practice. This begs the question: is there a more intelligent way of using a fixed set of samples from p and q to estimate the ratio? 3 Telescoping density-ratio estimation We introduce a new framework for estimating density-ratios p/q that can overcome the densitychasm problem in a sample-efficient manner. Intuitively, the density-chasm problem arises whenever classifying between p and q is ‘too easy’. This suggests that it may be fruitful to decompose the task into a collection of harder sub-tasks. For convenience, we make the notational switch p ≡ p0, q ≡ pm (which we will keep going forward), and expand the ratio via a telescoping product p0(x) pm(x) = p0(x) p1(x) p1(x) p2(x) . . . pm−2(x) pm−1(x) pm−1(x) pm(x) , (3) where, ideally, each pk is chosen such that a classifier cannot easily distinguish it from its two neighbouring densities. Instead of attempting to build one large ‘bridge’ (i.e. density-ratio) across the density-chasm, we propose to build many small bridges between intermediate ‘waymark’ distributions. The two key components of the method are therefore: 1. Waymark creation. We require a method for gradually transporting samples {x10, . . . ,xn0} from p0 to samples {x1m, . . . ,xnm} from pm. At each step in the transportation, we obtain a new dataset {x1k, . . . ,xnk} where k ∈ {0, . . .m}. Each intermediate dataset can be thought of as samples from an implicit distribution pk, which we refer to as a waymark distribution. 2. Bridge-building: A method for learning a set of parametrised density-ratios between consecutive pairs of waymarks rk(x;θk) ≈ pk(x)/pk+1(x) for k = 0, . . . ,m− 1, where each bridge rk is a non-negative function. We refer to these ratio estimating models as bridges. Note that the parameters of the bridges, {θk}m−1k=0 , can be totally independent or they can be partially shared. An estimate of the original ratio is then given by the product of the bridges r(x;θ) = m−1∏ k=0 rk(x;θk) ≈ m−1∏ k=0 pk(x) pk+1(x) = p0(x) pm(x) , (4) where θ is the concatenation of all θk vectors. Because of the telescoping product in (4), we refer to the method as Telescoping density-Ratio Estimation (TRE). TRE has conceptual ties with a range of methods in optimisation, statistical physics and machine learning that leverage sequences of intermediate distributions, typically between a complex density p and a simple tractable density q. Of particular note are the methods of Simulated Annealing [32], Bridge Sampling & Path Sampling [14] and Annealed Importance Sampling (AIS) [45]. Whilst none of these methods estimate density ratios, and thus serve fundamentally different purposes, they leverage similar ideas. In particular, AIS also computes a chain of density-ratios between artificially constructed intermediate distributions. It typically does this by first defining explicit expressions for the intermediate densities, and then trying to obtain samples via MCMC. In contrast, TRE implicitly defines the intermediate distributions via samples and then tries to learn the ratios. 
Additionally, in TRE we would like to evaluate the learned ratios in (4) at the same input x while AIS should only evaluate a ratio rk at ‘local’ samples from e.g. pk. 3.1 Waymark creation In this paper, we consider two simple, deterministic waymark creation mechanisms: linear combinations and dimension-wise mixing. We find these mechanisms yield good performance and are computationally cheap. However, we note that other mechanisms are possible, and are a promising topic for future work. Linear combinations. Given a random pair x0 ∼ p0 and xm ∼ pm, define the kth waymark via xk = √ 1− α2k x0 + αkxm, k = 0, . . . ,m (5) where the αk form an increasing sequence from 0 to 1, which control the distance of xk from x0. For all of our experiments (except, for illustration purposes, those depicted in Figure 1), each dimension of p0 and pm has the same variance2 and the coefficients in (5) are chosen to preserve this variance, with the goal being to match basic properties of the waymarks and thereby make consecutive classification problems harder. Dimension-wise mixing. An alternative way to ‘mix’ two vectors is to concatenate different subsets of their dimensions. Given a d-length vector x, we can partition it into m sub-vectors of length d/m, assuming d is divisible by m. We denote this as x = (x[1], . . . ,x[m]), where each x[i] has length d/m. Using this notation, define the kth waymark via xk = (xm[1], . . . , xm[k], x0[k + 1], . . . , x0[m]) k = 0, . . . ,m (6) where, again, x0 ∼ p0 and xm ∼ pm are randomly paired. Number and spacing. Given these two waymark generation mechanisms, we still need to decide the number of waymarks, m, and, in the case of linear combinations, how the αk are spaced in the unit interval. We treat these quantities as hyperparameters, and demonstrate in the experiments (Section 4) that tuning them is feasible with a limited search budget. 3.2 Bridge-building Each bridge rk(x;θk) in (4) can be learned via binary classification using a logistic loss function as described in Section 2. Solving this collection of classification tasks is therefore a multi-task learning (MTL) problem—see [55] for a review. Two key questions in MTL are how to share parameters and how to define a joint objective function. Parameter sharing. We break the construction of the bridges rk(x;θk) into two stages: a (mostly) shared body computing hidden vectors fk(x)3, followed by bridge-specific heads. The body fk is a deep neural network with shared parameters and pre-activation per-hidden-unit scales and biases for each bridge (see appendix for details). Similar parameter sharing schemes have been successfully used in the multi-task learning literature [7, 11]. The heads map the hidden vectors fk(x) to the scalar log rk(x;θk). We use either linear or quadratic mappings depending on the application; the precise parameterisation is stated in each experiment section. TRE loss function. The TRE loss function is given by the average of the m logistic losses LTRE(θ) = 1 m m−1∑ k=0 Lk(θk), (7) Lk(θk) = −Exk∼pk log ( rk(xk;θk) 1 + rk(xk;θk) ) − Exk+1∼pk+1 log ( 1 1 + rk(xk+1;θk) ) . (8) This simple unweighted average works well empirically. More sophisticated multi-task weighting schemes exist [5], but preliminary experiments suggested they were not worth the extra complexity. An important aspect of this loss function is that each ratio estimator rk sees different samples during training. In particular, r0 sees samples close to the real data i.e. 
from p0 and p1, while the final ratio rm−1 sees data from pm−1 and pm. This creates a potential mismatch between training and deployment, since after learning, we would like to evaluate all ratios at the same input x. In our experiments, we do not find this mismatch to be a problem, suggesting that each ratio, despite seeing different inputs during training, is able to generalise to new test points. We speculate that this generalisation is encouraged by parameter sharing, which allows each ratio-estimator to be indirectly influenced by samples from all waymark distributions. Nevertheless, we think a deeper analysis of this issue of generalisation deserves further work. 3.3 TRE applied to mutual information estimation The mutual information (MI) between two random variables u and v can be written as I(u,v) = Ep(u,v) [ log r(u,v) ] , r(u,v) = p(u,v) p(u)p(v) . (9) 2For MI estimation this always holds, for energy-based modelling this is enforceable via the choice of pm. 3For simplicity, we suppress the parameters of fk, and will do the same for rk in the experiments section. Given samples from the joint density p(u,v), one obtains samples from the product-of-marginals p(u)p(v) by shuffling the v vectors across the dataset. This then enables standard density-ratio estimation to be performed. For TRE, we require waymark samples. To generate these, we take a sample from the joint, x0 = (u,v0), and a sample from the product-of-marginals, xm = (u,vm), where u is held fixed and only v is altered. We then apply a waymark construction mechanism from Section 3.1 to generate xk = (u,vk), for k = 0, . . . ,m. 3.4 TRE applied to energy-based modelling An energy-based model (EBM) is a flexible parametric family {φ(x;θ)} of non-negative functions, where each function is proportional to a probability-density. Given samples from a data distribution with density p(x), the goal of energy-based modelling is to find a parameter θ∗ such that φ(x;θ∗) is ‘close’ to cp(x), for some positive constant c. In this paper, we consider EBMs of the form φ(x;θ) = r(x;θ)q(x), where q is a known density (e.g. a Gaussian or normalising flow) that we can sample from, and r is an unconstrained positive function. Given this parameterisation, the optimal r simply equals the density-ratio p(x)/q(x), and hence the problem of learning an EBM becomes the problem of estimating a density-ratio, which can be solved via TRE. We note that, since TRE actually estimates a product of ratios as stated in Equation 4, the final EBM will be a product-of-experts model [26] of the form φ(x;θ) = ∏m−1 k=0 rk(x;θk)q(x). The estimation of EBMs via density-ratio estimation has been studied in multiple prior works, including noise-contrastive estimation (NCE) [22], which has many appealing theoretical properties [22, 54, 65]. Following NCE, we will refer to the known density q as the ‘noise distribution’. 4 Experiments We include two toy examples illustrating both the correctness of TRE and the fact that it can solve problems which verge on the intractable for standard density ratio estimation. We then demonstrate the utility of TRE on two high-dimensional complex tasks, providing clear evidence that it substantially improves on standard single-ratio baselines. For experiments with continuous random variables, we use the linear combination waymark mechanisms in (5); otherwise, for discrete variables, we use dimension-wise mixing (6). 
3.4 TRE applied to energy-based modelling

An energy-based model (EBM) is a flexible parametric family {φ(x; θ)} of non-negative functions, where each function is proportional to a probability-density. Given samples from a data distribution with density p(x), the goal of energy-based modelling is to find a parameter θ* such that φ(x; θ*) is 'close' to c·p(x), for some positive constant c. In this paper, we consider EBMs of the form φ(x; θ) = r(x; θ)q(x), where q is a known density (e.g. a Gaussian or normalising flow) that we can sample from, and r is an unconstrained positive function. Given this parameterisation, the optimal r simply equals the density-ratio p(x)/q(x), and hence the problem of learning an EBM becomes the problem of estimating a density-ratio, which can be solved via TRE. We note that, since TRE actually estimates a product of ratios as stated in Equation 4, the final EBM will be a product-of-experts model [26] of the form $\phi(x;\theta) = \prod_{k=0}^{m-1} r_k(x;\theta_k)\, q(x)$.

The estimation of EBMs via density-ratio estimation has been studied in multiple prior works, including noise-contrastive estimation (NCE) [22], which has many appealing theoretical properties [22, 54, 65]. Following NCE, we will refer to the known density q as the 'noise distribution'.

4 Experiments

We include two toy examples illustrating both the correctness of TRE and the fact that it can solve problems which verge on the intractable for standard density-ratio estimation. We then demonstrate the utility of TRE on two high-dimensional complex tasks, providing clear evidence that it substantially improves on standard single-ratio baselines. For experiments with continuous random variables, we use the linear combination waymark mechanism in (5); otherwise, for discrete variables, we use dimension-wise mixing (6). For the linear combination mechanism, we collapse the α_k into a single spacing hyperparameter, and grid-search over this value, along with the number of waymarks. Details are in the appendix.

4.1 1d peaked ratio

The basic setup is stated in Figure 1a. For TRE, we use quadratic bridges of the form log r_k(x) = w_k x² + b_k, where b_k is set to its ground-truth value (as derived in the appendix), and w_k is reparametrised as exp(θ_k) to avoid unnecessary log-scales in Figure 1. The single ratio-estimation results use the same parameterisation (dropping the subscript k).

Figure 2 shows the full results. These sample-efficiency curves clearly demonstrate that, across all sample sizes, TRE is significantly more accurate than single ratio estimation. In fact, TRE obtains a better solution with 100 samples than single-ratio estimation does with 100,000 samples: a three orders of magnitude improvement.

4.2 High-dimensional ratio with large MI

This toy problem has been widely used in the mutual information literature [2, 52]. Let x ∈ ℝ^{2d} be a Gaussian random variable, with block-diagonal covariance matrix, where each block is 2×2 with 1 on the diagonal and 0.8 on the off-diagonal. We then estimate the ratio between this Gaussian and a standard normal distribution. This problem can be viewed as an MI estimation task or an energy-based modelling task—see the appendix for full details. We apply TRE using quadratic bridges of the form log r_k(x) = x⊤W_k x + b_k.

The results in Figure 3 show that single ratio estimation becomes severely inaccurate for MI values greater than 20 nats. In contrast, TRE can accurately estimate MI values as large as 80 nats for 320-dimensional variables. To our knowledge, TRE is the first discriminative MI estimation method that can scale this gracefully.

4.3 MI estimation & representation learning on SpatialMultiOmniglot

We applied TRE to the SpatialMultiOmniglot problem taken from [49]⁴, where characters from Omniglot are spatially stacked in an n × n grid, where each grid position contains characters from a fixed alphabet. Following [49], the individual pixel values of the characters are not considered random variables; rather, we treat the grid as a collection of n² categorical random variables whose realisations are the characters from the respective alphabet. Pairs of grids, (u, v), are then formed such that corresponding grid-positions contain alphabetically consecutive characters. Given this setup, the ground-truth MI can be calculated (see appendix).

⁴We mirror their experimental setup as accurately as possible; however, we were unable to obtain their code.

Each bridge in TRE uses a separable architecture [52] given by log r_k(u, v) = g(u)⊤W_k f_k(v), where g and f_k are 14-layer convolutional ResNets [24] and f_k uses the parameter-sharing scheme described in Section 3.2. We note that separable architectures are standard in the MI-based representation learning literature [52]. We construct waymarks using the dimension-wise mixing mechanism (6) with m = n² (i.e. one dimension is mixed at a time). After learning, we adopt a standard linear evaluation protocol (see e.g. [48]), where we train supervised linear classifiers on top of the output layer g(u) to predict the alphabetic position of each character in u.

We compare our results to those reported in [49]. Specifically, we report their baseline method—contrastive predictive coding (CPC) [48], a state-of-the-art representation learning method based on single density-ratio estimation—along with their variant, Wasserstein predictive coding (WPC).
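The following is a minimal sketch of the separable bridge parameterisation log r_k(u, v) = g(u)⊤W_k f_k(v). The real encoders are 14-layer convolutional ResNets; here g and f_k are stand-in linear maps (our simplification) so the algebra is easy to follow.

```python
# Separable bridges (Section 4.3) with toy linear encoders.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, m = 64, 32, 4                       # toy sizes (assumptions)

G = rng.standard_normal((d_hid, d_in))           # shared encoder for u
F = [rng.standard_normal((d_hid, d_in)) for _ in range(m)]   # per-bridge f_k
W = [rng.standard_normal((d_hid, d_hid)) for _ in range(m)]  # bridge heads W_k

def log_r(k, u, v):
    """Separable bridge: scalar log-ratio log r_k(u, v) = g(u)^T W_k f_k(v)."""
    return (G @ u) @ W[k] @ (F[k] @ v)

def log_r_full(u, v):
    """Telescoped estimate: log r(u, v) = sum_k log r_k(u, v), cf. Eq. (4)."""
    return sum(log_r(k, u, v) for k in range(m))
```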
Figure 4 shows the results. The left plot shows that only TRE can accurately estimate high MI values of ∼35 nats⁵. The representation learning results (right) show that all single density-ratio baselines degrade significantly in performance as we increase the number of characters in a grid (and hence increase the MI). In contrast, TRE always obtains greater than 97% accuracy.

⁵[49] do not provide MI estimates for CPC & WPC, but [52] shows that they are bounded by log batch-size.

4.4 Energy-based modelling on MNIST

As explained in Section 3.4, TRE can be used to estimate an energy-based model of the form $\phi(x;\theta) = \prod_{k=0}^{m-1} r_k(x;\theta_k)\, q(x)$, where q is a pre-specified 'noise' distribution from which we can sample, and the product of ratios is given by TRE. In this section, we demonstrate that such an approach can scale to high-dimensional data, by learning energy-based models of the MNIST handwritten digit dataset [37]. We consider three choices of the noise distribution: a multivariate Gaussian, a Gaussian copula and a rational-quadratic neural spline flow (RQ-NSF) [12] with coupling layers [9, 31]. Each distribution is first fitted to the data via maximum likelihood estimation—see appendix for details.

[Figure 5: MNIST samples. Each row pertains to a particular noise distribution (Gaussian, Copula, RQ-NSF). The first block shows exact samples from that distribution. The second & third blocks show MCMC samples from an EBM learned with NCE & TRE, respectively.]

Each of these noise distributions can be expressed as an invertible transformation of a standard normal distribution. That is, each random variable has the form F(z), where z ∼ N(0, I). Since F already encodes useful information about the data distribution, it makes sense to leverage this when constructing the waymarks in TRE. Specifically, we can generate linear combination waymarks via (5) in z-space, and then map them back to x-space, giving
$$ x_k = F\!\left(\sqrt{1-\alpha_k^2}\, F^{-1}(x_0) + \alpha_k F^{-1}(x_m)\right). \tag{10} $$
For a Gaussian, F is linear, and hence (10) is identical to the original waymark mechanism in (5).

We use the parameter-sharing scheme from Section 3.2 together with quadratic heads. This gives log r_k(x) = −f_k(x)⊤W_k f_k(x) − f_k(x)⊤b_k − c_k, where we set f_k to be an 18-layer convolutional ResNet and constrain W_k to be positive definite. This constraint enforces an upper limit on the log-density of the EBM, which has been useful in other work [44, 46], and improves results here.

We evaluate the learned EBMs quantitatively via estimated log-likelihood in Table 1 and qualitatively via random samples from the model in Figure 5. For both of these evaluations, we employ NUTS [28] to perform annealed MCMC sampling as explained in the appendix. This annealing procedure provides two estimators of the log-likelihood: the Annealed Importance Sampling (AIS) estimator [45] and the more conservative Reverse Annealed Importance Sampling Estimator (RAISE) [3].

The results in Table 1 and Figure 5 show that single ratio estimation performs poorly in high dimensions for simple choices of the noise distribution, and only works well if we use a complex neural density-estimator (RQ-NSF). This illustrates the density-chasm problem explained in Section 2. In contrast, TRE yields improvements for all choices of the noise, as measured by the approximate log-likelihood and the visual fidelity of the samples.
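As a concrete illustration of the z-space waymark mechanism in (10), the following is a minimal sketch. It assumes the noise distribution exposes an invertible map F (and its inverse) from standard-normal z-space to data space; we use a fitted Gaussian F(z) = μ + Lz for illustration, with placeholder parameters.

```python
# Sketch of the z-space waymark mechanism, Eq. (10).
import numpy as np

mu = np.zeros(784)                               # toy Gaussian fit (assumption)
L = np.eye(784)                                  # Cholesky factor of the covariance

def F(z):
    return mu + z @ L.T                          # z-space -> x-space

def F_inv(x):
    return np.linalg.solve(L, (x - mu).T).T      # x-space -> z-space

def z_space_waymarks(x0, xm, alphas):
    """Eq. (10): interpolate in z-space, then map back to x-space."""
    z0, zm = F_inv(x0), F_inv(xm)
    a = np.asarray(alphas)[:, None, None]
    zk = np.sqrt(1.0 - a**2) * z0[None] + a * zm[None]
    return F(zk)                                 # shape (m + 1, n, d)
```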
TRE's improvement over the Gaussian noise distribution is particularly large: the bits per dimension (bpd) is around 0.66 lower, corresponding to an improvement of roughly 360 nats. Moreover, the samples are significantly more coherent, and appear to be of higher fidelity than the RQ-NSF samples⁶, despite the fact that TRE (with Gaussian noise) has a worse log-likelihood. This final point is not contradictory, since log-likelihood and sample quality are known to be only loosely connected [61].

⁶We emphasise here that the quality of the RQ-NSF model depends on the exact architecture. A larger model may yield better samples. Thus, we do not claim that TRE generally yields superior results in any sense.

Finally, we analysed the sensitivity of our results to the construction of the waymarks and include the results in the appendix. Using TRE with a copula noise distribution as an illustrative case, we found that varying the number of waymarks between 5 and 30 caused only minor changes in the approximate log-likelihoods, no greater than 0.03 bpd. We also found that if we omit the z-space waymark mechanism in (10) and work in x-space, then TRE's negative log-likelihood increases to 1.33 bpd, as measured by RAISE. This is still significantly better than single-ratio estimation, but does show that the quality of the results depends on the exact waymark mechanism.

5 Conclusion

We introduced a new framework—Telescoping density-Ratio Estimation (TRE)—for learning density-ratios that, unlike existing discriminative methods, can accurately estimate ratios between extremely different densities in high dimensions. TRE admits many exciting directions for future work. Firstly, we would like a deeper theoretical understanding of why it is so much more sample-efficient than standard density-ratio estimation. The relationship between TRE and standard methods is structurally similar to the relationship between annealed importance sampling and standard importance sampling. Thus, exploring this connection further may be fruitful. Relatedly, we believe that TRE would benefit from further research on waymark mechanisms. We presented simple mechanisms that have clear utility for both discrete and continuous-valued data. However, we suspect more sophisticated choices may yield improvements, especially if one can leverage domain- or task-specific assumptions to intelligently decompose the density-ratio problem. Lastly, whilst this paper has focused on the logistic loss, it would be interesting to more deeply investigate TRE with other discriminative loss functions.

Broader Impact

As outlined in the introduction, density-ratio estimation is a foundational tool in machine learning with diverse applications. Our work, which improves density-ratio estimation, may therefore increase the scope and power of a wide spectrum of techniques used both in research and real-world settings. The broad utility of our contribution makes it challenging to concretely assess the societal impact of the work. However, we do discuss here two applications of density-ratio estimation with obvious potential for positive & negative impacts on society. Generative Adversarial Networks [15] are a popular class of models which are often trained via density-ratio estimation and are able to generate photo-realistic image/video content. To the extent that TRE can enhance GAN training (a topic we do not treat in this paper), our work could conceivably lead to enhanced 'deepfakes', which can be maliciously used in fake news or identity fraud.
More positively, density-ratio estimation is being used to correct for dataset bias, including the presence of skewed demographic factors like race and gender [18]. While we are excited about such applications, we emphasise that density-ratio based methods are not a panacea; it is entirely possible for the technique to introduce new biases when correcting for existing ones. Future work should continue to be mindful of such a possibility, and look for ways to address the issue if it arises.

Acknowledgments and Disclosure of Funding

Benjamin Rhodes was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh. Kai was supported by the Edinburgh Huawei Research Lab at the University of Edinburgh, funded by Huawei Technologies Co. Ltd.
1. What is the main contribution of the paper, and what problem does it solve in machine learning? 2. What are the strengths of the proposed method, particularly in its application and potential impact? 3. What are the weaknesses of the paper, especially regarding the tuning process and the need for theoretical analysis? 4. How does the reviewer assess the significance of the paper's contribution to density ratio estimation and its potential applications? 5. Are there any questions or concerns regarding the paper's experiments and their scalability to higher dimensions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a better classifier-based estimator for density ratios. The idea is based on annealing. Instead of classifying between two distributions, the authors propose to classify between multiple bridge distributions and estimate the density ratio by combining all the intermediate estimates. This approach is particularly suitable for solving the density-chasm problem, where two distributions are very far away from each other and the classification problem becomes too easy. Experiments demonstrate the great potential of this method in mutual information estimation and learning energy-based models.

Strengths
1. Density ratio estimation is very important in various aspects of machine learning, including representation learning, mutual information estimation, and learning energy-based generative models. It is well known that classifier-based methods for density ratio estimation have bad performance whenever two distributions are too far away from each other, and this has led to many problems, for example the difficulty of choosing the proposal distributions in noise-contrastive estimation, and the excessive variance in classifier-based mutual information estimation. A better density ratio estimator can have broad impact in many applications and is very interesting to the general NeurIPS community.
2. The proposed method is applicable to discrete random variables, which is not always true for competing methods.
3. Experiments convincingly demonstrate the advantages of telescoping over the baseline.

Weaknesses
1. Both waymark creation and bridge-building can be tricky to tune. Finding good intermediate samples is hard and relies heavily on experimentation and experience, and special tricks like coupling sources of randomness are needed for variance reduction. Unfortunately, the experimental results in Section 4.4 demonstrate that choosing the right waymarks is indeed critical for the performance (x-space vs. latent space). Aside from waymark creation, it is also tricky to find a correct architecture for the estimator to share information across different bridge distributions.
2. It would be useful to have a theoretical analysis of why telescoping significantly improves the performance of single-ratio density-ratio estimation. Since the experiments in this paper are on relatively low-dimensional data, it would also be valuable to know how the telescoping method scales to higher dimensions. Would higher-dimensional problems require considerably more intermediate distributions? If so, how bad is it?
NIPS
Title
Telescoping Density-Ratio Estimation

Abstract
Density-ratio estimation via classification is a cornerstone of unsupervised learning. It has provided the foundation for state-of-the-art methods in representation learning and generative modelling, with the number of use-cases continuing to proliferate. However, it suffers from a critical limitation: it fails to accurately estimate ratios p/q for which the two densities differ significantly. Empirically, we find this occurs whenever the KL divergence between p and q exceeds tens of nats. To resolve this limitation, we introduce a new framework, telescoping density-ratio estimation (TRE), that enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE can yield substantial improvements over existing single-ratio methods for mutual information estimation, representation learning and energy-based modelling.

1 Introduction

Unsupervised learning via density-ratio estimation is a powerful paradigm in machine learning [60] that continues to be a source of major progress in the field. It consists of estimating the ratio p/q from their samples without separately estimating the numerator and denominator. A common way to achieve this is to train a neural network classifier to distinguish between the two sets of samples, since for many loss functions the ratio p/q can be extracted from the optimal classifier [60, 21, 41]. This discriminative approach has been leveraged in diverse areas such as covariate shift adaptation [59, 63], energy-based modelling [22, 4, 53, 64, 36, 19], generative adversarial networks [15, 47, 43], bias correction for generative models [20, 18], likelihood-free inference [50, 62, 8, 13], mutual-information estimation [2], representation learning [29, 30, 48, 25, 27], Bayesian experimental design [33, 34] and off-policy reward estimation in reinforcement learning [39]. Across this diverse set of applications, density-ratio based methods have consistently yielded state-of-the-art results.

Despite the successes of discriminative density-ratio estimation, many existing loss functions share a severe limitation. Whenever the 'gap' between p and q is large, the classifier can obtain almost perfect accuracy with a relatively poor estimate of the density ratio. We refer to this failure mode as the density-chasm problem—see Figure 1a for an illustration. We observe empirically that the density-chasm problem manifests whenever the KL-divergence DKL(p ‖ q) exceeds ∼20 nats¹. This observation accords with recent findings in the mutual information literature regarding the limitations of density-ratio based estimators of the KL [40, 52, 57]. In high dimensions, it can easily occur that two densities p and q will have a KL-divergence measuring in the hundreds of nats, and so the ratio may be virtually intractable to estimate with existing techniques.

In this paper, we propose a new framework for estimating density-ratios that can overcome the density-chasm problem. Our solution uses a 'divide-and-conquer' strategy composed of two steps. The first step is to gradually transport samples from p to samples from q, creating a chain of intermediate datasets. We then estimate the density-ratio between consecutive datasets along this chain, as illustrated in the top row of Figure 1b.

¹'nat' being a unit of information measured using the natural logarithm (base e).
[Figure 1: Illustration of standard density-ratio estimation vs. telescoping density-ratio estimation. (a) Density-ratio estimation between an extremely peaked Gaussian p (σ = 10⁻⁶) and a broad Gaussian q (σ = 1) using a single-parameter quadratic classifier (as detailed in Section 4.1). Left: a log-log scale plot of the densities and their ratio; p(x) is not visible, since the ratio overlaps it. Right: the solid blue line is the finite-sample logistic loss (Eq. 2) for 10,000 samples. Despite the large sample size, the minimiser (dotted blue line) is far from optimal (dotted black line). The dotted red line is the newly introduced TRE solution, which almost perfectly overlaps with the dotted black line. (b) Telescoping density-ratio estimation applied to the problem in (a), using the same 10,000 samples from p and q, illustrating the decomposition p/q = (p/p1)(p1/p2)(p2/p3)(p3/q). Top row: a collection of ratios, where p1, p2 and p3 are constructed by deterministically interpolating between samples from p and q. Bottom row: the logistic loss function for each ratio estimation problem. Observe that the finite-sample minimisers of each objective (red dotted lines) are either close to or exactly overlapping their optima (black dotted lines). After estimating each ratio, we then combine them by taking their product.]

Unlike the original ratio p/q, these 'chained ratios' can be accurately estimated via classification (see bottom row). Finally, we combine the chained ratios via a telescoping product to obtain an estimate of the original density-ratio p/q. Thus, we refer to the method as Telescoping density-Ratio Estimation (TRE).

We empirically demonstrate that TRE can accurately estimate density-ratios using deep neural networks on high-dimensional problems, significantly outperforming existing single-ratio methods. We show this for two important applications: representation learning via mutual information (MI) estimation and the learning of energy-based models (EBMs). In the context of mutual information estimation, we show that TRE can accurately estimate large MI values of 30+ nats, which is recognised to be an outstanding problem in the literature [52]. However, obtaining accurate MI estimates is often not our sole objective; we also care about learning representations from e.g. audio or image data that are useful for downstream tasks such as classification or clustering. To this end, our experimental results for representation learning confirm that TRE offers substantial gains over a range of existing single-ratio baselines.

In the context of energy-based modelling, we show that TRE can be viewed as an extension of noise-contrastive estimation [22] that more efficiently scales to high-dimensional data. Whilst energy-based modelling has been a topic of interest in the machine learning community for some time [56], there has been a recent surge of interest, with a wave of new methods for learning deep EBMs in high dimensions [10, 6, 58, 38, 17, 68].
These methods have shown promising results for image and 3D shape synthesis [66], hybrid modelling [16], and modelling of exchangeable data [67]. However, many of these methods result in expensive/challenging optimisation problems, since they rely on approximate Markov chain Monte Carlo (MCMC) sampling during learning [10, 16, 68], or on adversarial optimisation [6, 17, 68]. In contrast, TRE requires no MCMC during learning and uses a well-defined, non-adversarial, objective function. Moreover, as we show in our mutual information experiments, TRE is applicable to discrete data, whereas all other recent EBM methods only work for continuous random variables. Applicability to discrete data makes TRE especially promising for domains such as natural language processing, where noise-contrastive estimation has been widely used [42, 35, 1].

2 Discriminative ratio estimation and the density-chasm problem

Suppose p and q are two densities for which we have samples, and that q(x) > 0 whenever p(x) > 0. We can estimate the density-ratio r(x) = p(x)/q(x) by training a classifier to distinguish samples from p and q [23, 60, 22]. There are many choices for the loss function of the classifier [60, 51, 21, 41, 52], but in this paper we concentrate on the widely used logistic loss
$$ \mathcal{L}(\theta) = -\mathbb{E}_{x_1 \sim p} \log\!\left(\frac{r(x_1;\theta)}{1 + r(x_1;\theta)}\right) - \mathbb{E}_{x_2 \sim q} \log\!\left(\frac{1}{1 + r(x_2;\theta)}\right), \tag{1} $$
where r(x; θ) is a non-negative ratio estimating model. To enforce non-negativity, r is typically expressed as the exponential of an unconstrained function such as a neural network. For a correctly specified model, the minimiser of this loss, θ*, satisfies r(x; θ*) = p(x)/q(x), without needing any normalisation constraints [22]. Other classification losses do not always have this self-normalising property, but only yield an estimate proportional to the true ratio—see e.g. [52].

The density-chasm problem. We experimentally find that density-ratio estimation via classification typically works well when p and q are 'close', e.g. the KL divergence between them is less than ∼20 nats. However, for sufficiently large gaps, which we refer to as density-chasms, the ratio estimator is often severely inaccurate. This raises the obvious question: what is the cause of such inaccuracy?

There are many possible sources of error: the use of misspecified models, imperfect optimisation algorithms, and inaccuracy stemming from Monte Carlo approximations of the expectations in (1). We argue that this mundane final point—Monte Carlo error due to finite sample size—is actually sufficient for inducing the density-chasm problem. Figure 1a depicts a toy problem for which the model is well-specified, and because it is 1-dimensional (w.r.t. θ), optimisation is straightforward using grid-search. And yet, if we use a sample size of n = 10,000 and minimise the finite-sample loss
$$ \mathcal{L}_n(\theta) = \sum_{i=1}^{n}\left[ -\log\!\left(\frac{r(x_1^i;\theta)}{1 + r(x_1^i;\theta)}\right) - \log\!\left(\frac{1}{1 + r(x_2^i;\theta)}\right)\right], \qquad x_1^i \sim p,\; x_2^i \sim q, \tag{2} $$
we obtain an estimate θ̂ that is far from the asymptotic minimiser θ* = arg min L(θ). Repeating this same experiment for different sample sizes, we can empirically measure the method's sample efficiency, which is plotted as the blue curve in Figure 2. For the regime plotted, we see that an exponential increase in sample size only yields a linear decrease in estimation error. This empirical result is concordant with theoretical findings that density-ratio based lower bounds on KL divergences are only tight for sample sizes exponential in the number of nats [40].
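The following is a minimal numerical sketch of this finite-sample estimation problem, in the spirit of the toy setup of Figure 1a. The exact parameterisation log r(x; θ) = −½ exp(θ) x² + b and the grid range are our own illustrative choices, with the bias b fixed to its ground-truth value as in Section 4.1.

```python
# Single-ratio estimation with the finite-sample logistic loss of Eq. (2).
import numpy as np

sigma_p, sigma_q, n = 1e-6, 1.0, 10_000
rng = np.random.default_rng(0)
x_p = sigma_p * rng.standard_normal(n)           # samples from the peaked p
x_q = sigma_q * rng.standard_normal(n)           # samples from the broad q

b = np.log(sigma_q / sigma_p)                    # ground-truth log-ratio at x = 0

def loss(theta):
    """Finite-sample logistic loss with log r(x) = -0.5 * exp(theta) * x^2 + b."""
    log_r_p = -0.5 * np.exp(theta) * x_p**2 + b
    log_r_q = -0.5 * np.exp(theta) * x_q**2 + b
    # -log sigmoid(log r) and -log(1 - sigmoid(log r)), written stably:
    return np.mean(np.logaddexp(0.0, -log_r_p)) + np.mean(np.logaddexp(0.0, log_r_q))

thetas = np.linspace(10, 40, 300)                # grid search over theta
theta_hat = thetas[np.argmin([loss(t) for t in thetas])]
print(theta_hat)                                 # typically far from the asymptotic optimum
```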
Whilst we focus on the logistic loss, we believe the density-chasm problem is a broader phenomenon. As shown in the appendix, the issues identified in Figure 1 and the sample inefficiency seen in Figure 2 also occur for other commonly used discriminative loss functions. Thus, when faced with the density-chasm problem, simply increasing the sample size is a highly inefficient solution and not always possible in practice. This begs the question: is there a more intelligent way of using a fixed set of samples from p and q to estimate the ratio?

3 Telescoping density-ratio estimation

We introduce a new framework for estimating density-ratios p/q that can overcome the density-chasm problem in a sample-efficient manner. Intuitively, the density-chasm problem arises whenever classifying between p and q is 'too easy'. This suggests that it may be fruitful to decompose the task into a collection of harder sub-tasks. For convenience, we make the notational switch p ≡ p_0, q ≡ p_m (which we will keep going forward), and expand the ratio via a telescoping product
$$ \frac{p_0(x)}{p_m(x)} = \frac{p_0(x)}{p_1(x)}\,\frac{p_1(x)}{p_2(x)} \cdots \frac{p_{m-2}(x)}{p_{m-1}(x)}\,\frac{p_{m-1}(x)}{p_m(x)}, \tag{3} $$
where, ideally, each p_k is chosen such that a classifier cannot easily distinguish it from its two neighbouring densities. Instead of attempting to build one large 'bridge' (i.e. density-ratio) across the density-chasm, we propose to build many small bridges between intermediate 'waymark' distributions. The two key components of the method are therefore:

1. Waymark creation. We require a method for gradually transporting samples {x_0^1, . . . , x_0^n} from p_0 to samples {x_m^1, . . . , x_m^n} from p_m. At each step in the transportation, we obtain a new dataset {x_k^1, . . . , x_k^n}, where k ∈ {0, . . . , m}. Each intermediate dataset can be thought of as samples from an implicit distribution p_k, which we refer to as a waymark distribution.

2. Bridge-building. A method for learning a set of parametrised density-ratios between consecutive pairs of waymarks, r_k(x; θ_k) ≈ p_k(x)/p_{k+1}(x) for k = 0, . . . , m − 1, where each bridge r_k is a non-negative function. We refer to these ratio estimating models as bridges. Note that the parameters of the bridges, {θ_k}_{k=0}^{m−1}, can be totally independent or they can be partially shared.

An estimate of the original ratio is then given by the product of the bridges
$$ r(x;\theta) = \prod_{k=0}^{m-1} r_k(x;\theta_k) \approx \prod_{k=0}^{m-1} \frac{p_k(x)}{p_{k+1}(x)} = \frac{p_0(x)}{p_m(x)}, \tag{4} $$
where θ is the concatenation of all θ_k vectors. Because of the telescoping product in (4), we refer to the method as Telescoping density-Ratio Estimation (TRE).

TRE has conceptual ties with a range of methods in optimisation, statistical physics and machine learning that leverage sequences of intermediate distributions, typically between a complex density p and a simple tractable density q. Of particular note are the methods of Simulated Annealing [32], Bridge Sampling & Path Sampling [14] and Annealed Importance Sampling (AIS) [45]. Whilst none of these methods estimate density ratios, and thus serve fundamentally different purposes, they leverage similar ideas. In particular, AIS also computes a chain of density-ratios between artificially constructed intermediate distributions. It typically does this by first defining explicit expressions for the intermediate densities, and then trying to obtain samples via MCMC. In contrast, TRE implicitly defines the intermediate distributions via samples and then tries to learn the ratios.
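The combination step in (4) is simple: in log-space, the overall estimate is just the sum of the per-bridge log-ratios, all evaluated at the same input x. A minimal sketch follows, where the bridge list is a placeholder for whatever parameterisation is used (quadratic heads, ResNets, ...).

```python
# Combining learned bridges via the telescoping product of Eq. (4).
import numpy as np

def log_ratio(x, log_bridges):
    """log r(x) = sum_k log r_k(x), all bridges evaluated at the same x.

    log_bridges : list of callables, each mapping x -> log r_k(x).
    """
    return sum(lb(x) for lb in log_bridges)

# Example with toy quadratic bridges log r_k(x) = w_k * x**2 + b_k.
ws, bs = [0.5, 0.2, 0.1], [1.0, 0.5, 0.2]
bridges = [lambda x, w=w, b=b: w * x**2 + b for w, b in zip(ws, bs)]
print(log_ratio(0.3, bridges))                   # scalar log-ratio estimate
```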
1. What is the focus of the paper regarding density ratio estimation? 2. What are the strengths of the proposed approach, particularly in its historical context and practical performance? 3. What are the weaknesses of the paper, specifically regarding the lack of information in the experiments section? 4. Do you have any concerns about the applicability or limitations of the method in different scenarios? 5. Are there any other relevant works or comparisons that could enhance the understanding of the paper's contribution?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
Density ratio estimation is hard when the involved densities diverge a lot. The surrogate objective (discriminator) can easily distinguish samples from them and no signal is received. The paper addresses this problem by introducing intermediate distributions in the tradition of adaptive importance sampling or bridge sampling. For these distributions, density ratio estimators can be trained, and the product of the density ratio estimates gives the overall density ratio.

Strengths
The approach is well motivated from a long history of similar ideas and seems to perform quite well in practice. The authors present easy-to-use ideas for constructing intermediate distributions while admitting that these can be improved.

Weaknesses
The main weakness is that the authors never reveal which energy-based model they use in the experiments section. This would make it hard to reproduce their work.
NIPS
Title
Kernel Memory Networks: A Unifying Framework for Memory Modeling

Abstract
We consider the problem of training a neural network to store a set of patterns with maximal noise robustness. A solution, in terms of optimal weights and state update rules, is derived by training each individual neuron to perform either kernel classification or interpolation with a minimum weight norm. By applying this method to feed-forward and recurrent networks, we derive optimal models, termed kernel memory networks, that include, as special cases, many of the hetero- and auto-associative memory models that have been proposed over the past years, such as modern Hopfield networks and Kanerva's sparse distributed memory. We modify Kanerva's model and demonstrate a simple way to design a kernel memory network that can store an exponential number of continuous-valued patterns with a finite basin of attraction. The framework of kernel memory networks offers a simple and intuitive way to understand the storage capacity of previous memory models, and allows for new biological interpretations in terms of dendritic non-linearities and synaptic cross-talk.

1 Introduction

Although the classical work on attractor neural networks reached its peak in the late 1980's, with the publication of a number of seminal works [e.g., 2, 20, 22, 26], recent years have seen a renewed interest in the topic, motivated by the popularity of the attention mechanism [65], external memory-augmented neural networks [24, 66], as well as a new generation of energy-based attractor network models, termed modern Hopfield networks (MHNs), capable of vastly increased memory storage [17, 35]. Recent efforts to understand the theoretical foundation of the attention mechanism have, in fact, shown that it can be linked to Hopfield networks [36, 57], but also to Kanerva's sparse distributed memory (SDM) [8, 30], and to the field of kernel machines [63, 68]. The last connection is particularly intriguing, in light of the many theoretical commonalities between neural networks and kernel methods [10, 11, 28, 47, 67]. Overall, these results suggest that a unified view can offer new insights into memory modeling and new tools for leveraging memory in machine learning.
In this work, we aim to clarify some of the overlap between the fields of memory modeling and statistical learning, by integrating and formalizing a set of theoretical connections between Hopfield networks, the SDM, kernel machines, and neuron models with non-linear dendritic processing.

*Joint senior authors.

1.1 Our contribution

• We derive a set of normative kernel-based models that describe the general mathematical structure of feed-forward (i.e., hetero-associative) and recurrent (i.e., auto-associative) memory networks that can perform error-free recall of a given set of patterns with maximal robustness to noise.
• We show that the normative models include, as special cases, the classical and modern Hopfield network, as well as the SDM.
• We derive a simple attractor network model for storing an exponential number of continuous-valued patterns with a finite basin of attraction. We discuss its similarity to attention. Furthermore, we explain how classifiers with non-linear kernels can be interpreted as general forms of neuron models with non-linear dendritic activation functions and synaptic cross-talk.

1.2 Related work

Our work is primarily related to [8, 9, 35, 36, 45, 57]. While MHNs are extensively analyzed in [35, 36, 45, 57], the approach is energy-based and makes no statements about the relation between MHNs and kernel methods; a brief comment in [57] mentions some similarity to SVMs, but this is not further explained. The work by [8] focuses on the SDM and its connection to attention. It observes that the classical Hopfield network is a special case of the SDM, but no further generalization is made, and kernel methods are not mentioned. In our work, we place MHNs and the SDM in a broader theoretical context by showing that both models are special suboptimal cases of a family of memory networks that can be derived with a normative kernel-based approach.

2 Background

Consider the following simple model of hetero-associative memory: a single-layer feed-forward network consisting of a single output neuron connected to N_in inputs with the weights w ∈ ℝ^N. The output s_out ∈ {±1} is given by
$$ s_{\text{out}} = \mathrm{sgn}\!\left[\,\mathbf{w}^{\top}\phi(\mathbf{s}_{\text{in}}) - \theta\,\right], \tag{1} $$
where s_in is the input vector (also called query), θ the threshold, and φ a function that maps the "raw" input to an N-dimensional feature space, where typically N ≫ N_in. Suppose that we are given a set of M input-output patterns {ξ^μ_in, ξ^μ_out}_{μ=1}^M, in which every entry ξ is randomly drawn from {±1} with sparseness f := P(ξ = 1). In order for the neuron to store the patterns in a way that maximizes the amount of noise it can tolerate while still being able to recall all patterns without errors, one needs to find the weights that produce the output ξ^μ_out in response to the input ξ^μ_in, ∀μ, and that maximize the smallest Euclidean distance between the inputs and the neuron's decision boundary. Using Gardner's formalism [20, 22], this problem can be expressed as
$$ \underset{\mathbf{w}}{\arg\max}\;\kappa \quad \text{s.t.} \quad \xi^{\mu}_{\text{out}}\!\left(\mathbf{w}^{\top}\phi(\xi^{\mu}_{\text{in}}) - \theta\right) \geq \kappa,\;\forall \mu, \qquad \|\mathbf{w}\|_2 = \bar{w}, \tag{2} $$
where w̄ > 0 is a constant. This is equivalent to solving
$$ \min_{\mathbf{w}} \|\mathbf{w}\|_2 \quad \text{s.t.} \quad \xi^{\mu}_{\text{out}}\!\left(\mathbf{w}^{\top}\phi(\xi^{\mu}_{\text{in}}) - \theta\right) \geq 1,\;\forall \mu, \tag{3} $$
which can be directly identified as the support vector machine (SVM) problem for separable data [13]. The solution to Eq. 3 can today be found in any textbook on basic machine learning methods, and yields an optimal output rule that can be written in a feature and kernel form
$$ s_{\text{out}} = \mathrm{sgn}\!\left[\sum_{\mu}^{M} \alpha^{\mu}\,\xi^{\mu}_{\text{out}}\,\phi(\xi^{\mu}_{\text{in}})^{\top}\phi(\mathbf{s}_{\text{in}}) - \theta\right] = \mathrm{sgn}\!\left[\sum_{\mu}^{M} \alpha^{\mu}\,\xi^{\mu}_{\text{out}}\,K(\xi^{\mu}_{\text{in}}, \mathbf{s}_{\text{in}}) - \theta\right], \tag{4} $$
where we, in the latter expression, have used the "kernel-trick" K(x_i, x_j) = φ(x_i)⊤φ(x_j). The solution depends on the Lagrange coefficients α^μ ≥ 0, many of which are typically zero. Patterns with α^μ > 0 are called support vectors.
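The following is a minimal sketch of this per-neuron training problem, using scikit-learn's SVC as a stand-in solver for Eq. (3) with a homogeneous polynomial kernel. The pattern sizes, the kernel degree, and the large C (approximating a hard margin) are illustrative choices of ours.

```python
# Storing M pattern pairs in a single output neuron via max-margin
# kernel classification (Eqs. 3-4), then recalling from a noisy query.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
M, N_in = 200, 100
X_in = rng.choice([-1, 1], size=(M, N_in))       # input patterns xi_in
xi_out = rng.choice([-1, 1], size=M)             # target output bits xi_out

# K(x, x') = (x^T x')^p with p = 3: gamma=1, coef0=0 gives the homogeneous case.
clf = SVC(kernel="poly", degree=3, gamma=1.0, coef0=0.0, C=1e6)
clf.fit(X_in, xi_out)

# Recall from a corrupted query: flip 5 bits of a stored input.
query = X_in[0].copy()
flip = rng.choice(N_in, size=5, replace=False)
query[flip] *= -1
print(clf.predict(query[None])[0], xi_out[0])    # recalled bit vs. stored bit
```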
3 Kernel memory networks for binary patterns

3.1 Hetero-associative memory as a feed-forward SVM network

We begin by considering a hetero-associative memory network with an arbitrary number N_out of output neurons, whose combined state we denote s_out. In order for the network as a whole to be able to tolerate a maximal level of noise and still successfully recall its stored memories, we solve Eq. 3 for each neuron independently. As each neuron can have a different classification boundary along with a different set of support vectors, its weights will, in general, be characterized by an independent set of M Lagrange coefficients. To simplify the notation, we represent these coefficients α^μ_i, across neurons i and patterns μ, as entries in the matrix A, where (A)_{iμ} = α^μ_i. We also combine all thresholds in the vector θ = (θ_1, . . . , θ_{N_out}), and all input and output patterns as columns in the matrices X_in = (ξ^1_in, . . . , ξ^M_in) and X_out = (ξ^1_out, . . . , ξ^M_out). Finally, we assume that all neurons have the same feature map, so that φ_i = φ, ∀i (see Fig. 1). All functions are applied column-wise when the argument is a matrix, for example φ(X_in) = (φ(ξ^1_in), . . . , φ(ξ^M_in)). The optimal response of the network can now be compactly summarized as follows.

Property 1 (Robust hetero-associative memory network). A single-layer hetero-associative memory network trained to recall the patterns X_out in response to the inputs X_in with maximal noise robustness, has an optimal output rule that can be written as
$$ \mathbf{s}_{\text{out}} = \mathrm{sgn}\!\left[(\mathbf{A} \circ \mathbf{X}_{\text{out}})\,\phi(\mathbf{X}_{\text{in}})^{\top}\phi(\mathbf{s}_{\text{in}}) - \boldsymbol{\theta}\right] \quad \text{(feature form)} \tag{5} $$
$$ \mathbf{s}_{\text{out}} = \mathrm{sgn}\!\left[(\mathbf{A} \circ \mathbf{X}_{\text{out}})\,K(\mathbf{X}_{\text{in}}, \mathbf{s}_{\text{in}}) - \boldsymbol{\theta}\right] \quad \text{(kernel form)} \tag{6} $$
where ∘ denotes the Hadamard product.
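The following is a minimal sketch of the kernel-form read-out in Property 1 (Eq. 6). The Lagrange coefficients A would normally come from the per-neuron SVMs; for brevity we use the suboptimal choice A = 1, θ = 0 with a polynomial kernel, which are our own simplifications.

```python
# Hetero-associative recall: s_out = sgn[(A o X_out) K(X_in, s_in) - theta].
import numpy as np

rng = np.random.default_rng(1)
N_in, N_out, M = 80, 40, 30
X_in = rng.choice([-1, 1], size=(N_in, M))       # input patterns as columns
X_out = rng.choice([-1, 1], size=(N_out, M))     # target patterns as columns
A = np.ones((N_out, M))                          # Lagrange coefficients (all-ones here)

def recall(s_in, p=3):
    k = (X_in.T @ s_in).astype(float) ** p       # K(X_in, s_in) with K = (x^T x')^p
    return np.sign((A * X_out) @ k)              # Hadamard product, then read-out

s_out = recall(X_in[:, 0])                       # query with a stored input
print(np.mean(s_out == X_out[:, 0]))             # fraction of correctly recalled bits
```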
3.2 Auto-associative memory as a recurrent SVM network

The hetero-associative network can be made auto-associative by setting $N_{out} = N_{in}$ and $X_{out} = X_{in}$. The network is now effectively recurrent, as each neuron can serve both as an input and an output simultaneously (see Fig. 1). Consider a recurrent network with $N$ neurons, whose state at time point $t$ is denoted $s^{(t)} \in \{\pm 1\}^N$, and whose dynamics evolve according to the update rule
$$s^{(t+1)}_i = \mathrm{sgn}\!\left[ w_i^\top \phi(s^{(t)}) - \theta_i \right] \quad (7)$$
where $w_i \in \mathbb{R}^N$ is the weight vector of neuron $i = 1, \ldots, N$. In order to make the patterns $\{\xi^\mu\}_{\mu=1}^{M}$ fixed points of the network dynamics, we train each neuron $i$ independently on every pattern $\mu$ to, again, produce the response $\xi^\mu_i$ when the rest of the network is initialized in $\xi^\mu$. Moreover, we maximize the amount of noise that can be tolerated by the network while maintaining error-free recall by maximizing the smallest Euclidean distance between each neuron's decision boundary and its inputs. This maximizes the size of the attractor basins [18, 32]. The problem of training the entire network is, in this way, transformed into the problem of training $N$ separate classifiers according to
$$\min_{w_i} \|w_i\|_2 \quad \text{s.t.} \quad \xi^\mu_i \left( w_i^\top \phi(\xi^\mu) - \theta_i \right) \ge 1, \ \forall \mu, i. \quad (8)$$
The solution can be obtained by slightly modifying Property 1, and is stated below.

Property 2.1 (Robust auto-associative memory). A recurrent auto-associative memory network trained to recall the patterns $X$ with maximal noise robustness has an optimal synchronous update rule that can be written as
$$s^{(t+1)} = \mathrm{sgn}\!\left[ (A \odot X)\, \phi(X)^\top \phi(s^{(t)}) - \theta \right] \quad \text{(feature form)} \quad (9)$$
$$\phantom{s^{(t+1)}} = \mathrm{sgn}\!\left[ (A \odot X)\, K(X, s^{(t)}) - \theta \right] \quad \text{(kernel form)} \quad (10)$$

Remark. With a linear feature map $\phi(x) = x$, the optimal update reduces to
$$s^{(t+1)} = \mathrm{sgn}\!\left[ (A \odot X)\, X^\top s^{(t)} - \theta \right] \quad (11)$$
where $(A \odot X) X^\top$ can be identified as the general form of the optimal weight matrix.

The solution described by Property 2.1 does not, in general, prohibit a neuron from having self-connections. Applying this constraint yields the following result.

Property 2.2 (Robust auto-associative memory without self-connections). A recurrent auto-associative memory network without self-connections, with the inner-product kernel $K(x_i, x_j) = k(x_i^\top x_j)$, that has been trained to recall the patterns $X$ with maximal noise robustness, has an optimal asynchronous update rule that can be written in the kernel form
$$s^{(t+1)}_i = \mathrm{sgn}\!\left[ \sum_{\mu}^{M} \alpha^\mu_i \xi^\mu_i\, k\!\left( \sum_{j \neq i}^{N} \xi^\mu_j s^{(t)}_j \right) - \theta_i \right]. \quad (12)$$

Storage capacity. An intuition for the storage capacity scaling of the hetero- and auto-associative memory networks can be gained by observing that the network as a whole will be able to successfully recall patterns as long as each neuron is able to correctly classify its inputs (or is very unlikely to produce an error). The capacity of the network can thereby be derived from the capacity of each individual neuron. It is well known that a linear binary classifier can learn to correctly discriminate a maximum of $M_{max} \approx 2 D_{VC}$ random patterns, where $D_{VC}$ is the Vapnik–Chervonenkis dimension of the classifier [15, 20, 40, ch. 40]. For a neuron with $N$ inputs and a linear feature map $\phi(x) = x$, this results in $D_{VC} = N$ and, thus, the capacity $M_{max} \approx 2N$. Suppose, on the other hand, that the kernel is a homogeneous polynomial of degree $p$, so that $K(x_i, x_j) = (x_i^\top x_j)^p$. In this case, $\phi$ will contain all monomials of degree $p$ composed of the entries in $x$. As there are $O(N^p)$ unique $p$-degree monomials (see Appendix A.1), the input dimensionality and $M_{max}$ will be $O(N^p)$. For the exponential kernel, which we can write as $K(x_i, x_j) = \exp(x_i^\top x_j) = \sum_{p=0}^{\infty} (x_i^\top x_j)^p / p!$, the dimensionality of $\phi$ will be $\sum_{p=0}^{N} \binom{N}{p} = 2^N$, which yields $M_{max} \sim O(e^N)$.

Special cases. In the following sections, we will show that many of the models of hetero- and auto-associative memory that have been proposed over the past years are special cases of the solutions in Properties 1, 2.1, and 2.2, characterized by specific choices of $A$, $\phi$, and $K$.

3.3 Kanerva's sparse distributed memory is a feed-forward SVM network

The sparse distributed memory (SDM), developed by Kanerva [30], is one of the most famous examples of a hetero-associative memory model. It has lately received much attention in the context of generative memory models [69] and attention layers in transformers [8]. The SDM consists of a register of $N$ memory slots, each associated with an address $z_i \in \{\pm 1\}^{N_{in}}$, $i = 1, \ldots, N$. All addresses are listed as rows in the matrix $Z = (z_1, \ldots, z_N)^\top$. The content of each slot is represented by an $N_{out}$-dimensional vector, initialized at zero. Suppose that we wish to store the $M$ patterns $X_{out} = (\xi^1_{out}, \ldots, \xi^M_{out})$ at the addresses $X_{in} = (\xi^1_{in}, \ldots, \xi^M_{in})$, where all entries are random and bipolar. The basic idea of the SDM is to write the data to, and later read it from, multiple memory slots at once (hence the distributed storage); this ensures a degree of noise robustness.
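The read-out rule is formalized in Eq. 13 below. As a preview, here is a minimal NumPy sketch (ours, with arbitrary parameter choices) of the write/read cycle just described:

```python
import numpy as np

rng = np.random.default_rng(2)
N_in, N_out, N_slots, M, r = 64, 32, 5000, 50, 24

Z = rng.choice([-1, 1], size=(N_slots, N_in))      # random slot addresses
C = np.zeros((N_slots, N_out))                     # slot contents, initialized at zero
X_in = rng.choice([-1, 1], size=(M, N_in))
X_out = rng.choice([-1, 1], size=(M, N_out))

def active(x):
    # Slots whose address lies within Hamming distance r of x, i.e. Z @ x >= N_in - 2r
    return (Z @ x) >= (N_in - 2 * r)

for x, y in zip(X_in, X_out):                      # write: add y to every active slot
    C[active(x)] += y

def read(query):                                   # read: pool the active slots, take the sign
    return np.sign(C[active(query)].sum(axis=0))

noisy = X_in[0] * np.where(rng.random(N_in) < 0.05, -1, 1)
print(np.mean(read(noisy) == X_out[0]))            # fraction of bits recovered
```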
In mathematical terms, the read-out of the SDM provided with a query $s_{in}$ is given by
$$s_{out} = \mathrm{sgn}\!\left[ X_{out}\, \Theta(Z X_{in} - b)^\top \Theta(Z s_{in} - b) \right] \quad (13)$$
where $\Theta$ is the Heaviside function, $b = N_{in} - 2r$ is a bias, and $r$ is a parameter that determines the precision of the writing and reading process. Upon comparing Eqs. 13 and 5, the SDM can be directly identified as a special case of a suboptimal feed-forward SVM network in the feature form, with $A = 1$, $\theta = 0$, and the feature map $\phi_{SDM}(x) = \Theta(Zx - b)$. When viewed as a kernel method, the function of the SDM is to store the dense addresses $X_{in}$ as sparse high-dimensional representations $\phi_{SDM}$, to make it easier to later determine the slots closest to a query $s_{in}$ and retrieve the relevant data.

Capacity. As the SDM is linear in $\phi_{SDM}$, with $D_{VC} \approx N$, it follows from the analysis in Sec. 3.2 that one should expect the capacity to scale as $M_{max} \sim O(N)$. Moreover, one should expect a proportionality constant of $\sim 0.1$, since the SDM is suboptimal relative to the feed-forward SVM network, analogously to how the classical Hopfield network is suboptimal relative to the recurrent SVM network (see Sec. 3.4). This is consistent with earlier proofs [12, 31].

Kernel of an infinite SDM. In practice, an SDM with a large number of memory slots $N$ requires calculations involving a large address matrix $Z$. This can be avoided by applying the kernel trick to Eq. 13 in the limit $N \to \infty$, which allows the output to be computed as
$$s_{out} = \mathrm{sgn}\!\left[ X_{out}\, K_{SDM}(X_{in}, s_{in}) \right] \quad (14)$$
where we have defined the kernel as
$$K_{SDM}(x_i, x_j) = \lim_{N \to \infty} \frac{\phi_{SDM}(x_i)^\top \phi_{SDM}(x_j)}{N} \quad (15)$$
in order to ensure convergence. In this section, we derive this kernel for two different variants of the SDM and demonstrate that both are translation-invariant. It is interesting to note here that $\phi_{SDM}$ is equivalent to a single-layer neural network with $N$ neurons, weights $Z$, and bias $b$. This means that $K_{SDM}$ is equivalent to the kernel of an infinitely wide neural network [11, 47, 67].

We begin by noticing that $\phi_{SDM}(x)$ has a geometrical interpretation [8, 31]. It is a binary vector that indicates those memory addresses in $Z$ that differ by at most $r$ bits compared to $x$. For any two bipolar vectors $z$ and $x$, the bit-wise difference can be computed as $\frac{1}{2}\|z - x\|_1 = \frac{1}{4}\|z - x\|_2^2$. This means that $\phi_{SDM}(x)$ indicates all addresses that lie within a sphere centered at $x$ with radius $2\sqrt{r}$. Consequently, the inner product $\phi_{SDM}(x_i)^\top \phi_{SDM}(x_j)$ is the number of addresses located in the overlapping volume of two spheres centered at $x_i$ and $x_j$. Although an exact calculation of this quantity can be found in [8, 30], its connection to the SDM kernel has, to the best of our knowledge, not previously been made. We therefore modify the previously published expression with a normalization factor $1/2^{N_{in}}$ and state the following property.

Property 3.1 (Kernel of an infinite SDM on the hypercube). In the limit $N \to \infty$, the kernel of an SDM with $N$ memory slots, whose addresses are randomly drawn from $\{\pm 1\}^{N_{in}}$, is given by
$$K_{SDM}(x_i, x_j) = \frac{1}{2^{N_{in}}} \sum_{i=N_{in}-r-\lfloor \Delta/2 \rfloor}^{N_{in}-\Delta} \;\; \sum_{j=[N_{in}-r-i]_+}^{\Delta-(N_{in}-r-i)} \binom{N_{in}-\Delta}{i} \cdot \binom{\Delta}{j} \quad (16)$$
where $r$ is the bit-wise error threshold and $\Delta$ is the bit-wise difference between $x_i$ and $x_j$, given by $\Delta = \frac{1}{2}\|x_i - x_j\|_1 = \frac{1}{4}\|x_i - x_j\|_2^2$.
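Because the exact sum in Eq. 16 is easy to misread, it can be sanity-checked against the defining limit in Eq. 15 by Monte Carlo. The sketch below is ours (parameter choices arbitrary); it estimates the kernel at finite N for increasing bit-wise differences Δ:

```python
import numpy as np

rng = np.random.default_rng(3)
N_in, N, r = 32, 200_000, 12             # address dimension, number of slots, Hamming radius
Z = rng.choice([-1, 1], size=(N, N_in))  # random slot addresses

def phi(x):
    # phi_SDM(x) = Theta(Z x - b) with b = N_in - 2r (indicator of a Hamming ball of radius r)
    return ((Z @ x) >= (N_in - 2 * r)).astype(float)

x = rng.choice([-1, 1], size=N_in)
for delta in (0, 2, 4, 8):               # flip `delta` bits to set the distance to x
    y = x.copy(); y[:delta] *= -1
    print(delta, round(phi(x) @ phi(y) / N, 4))   # finite-N estimate of Eq. 15
```

The printed values decay with Δ, independently of where the flipped bits sit, and can be compared directly with Eq. 16.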
The SDM can also be implemented with continuous addresses, randomly placed on a unit hypersphere of $(N_{in}-1)$ dimensions, denoted $S^{N_{in}-1}$. The vector $\phi_{SDM}(x)$ now indicates all addresses that lie within a hyperspherical cap centered at $x$, with an angle $\arccos(b)$ between its central axis and the rim. The inner product $\phi_{SDM}(x_i)^\top \phi_{SDM}(x_j)$ is the number of addresses located in the overlapping area of two spherical caps centered at $x_i$ and $x_j$. While a calculation of this quantity, again, can be found in [8], it has not previously been connected to the kernel of an SDM. We simplify the previously published result and also derive a closed-form approximation, valid for a highly sparse SDM (see Appendix B for details). The results are summarized below.

Property 3.2 (Kernel of an infinite SDM on the hypersphere). In the limit $N \to \infty$, the kernel of an SDM with $N$ memory slots, whose addresses are randomly drawn from $S^{N_{in}-1}$, is given by
$$K_{SDM}(x_i, x_j) = \frac{N_{in}-2}{2\pi} \int_{\alpha_x}^{\alpha_b} \sin(\varphi)^{N_{in}-2}\, B\!\left[ 1 - \frac{\tan^2(\alpha_x)}{\tan^2(\varphi)};\; \frac{N_{in}-2}{2}, \frac{1}{2} \right] d\varphi \quad (17)$$
where $\alpha_x = \frac{1}{2}\arccos(x_i^\top x_j)$, $\alpha_b = \arccos(b)$, and $B$ is the incomplete Beta function. In the highly sparse regime, when $0.9 \lesssim b < 1$ and $\frac{1}{N}\|\phi_{SDM}\|_0 \ll 1$, the kernel can be approximated with
$$K_{SDM}(x_i, x_j) \approx \frac{\hat{b}^{N_{in}-1}}{2\pi} B\!\left[ 1 - \left( \frac{\Delta}{\hat{b}} \right)^2;\; \frac{N_{in}}{2}, \frac{1}{2} \right] \quad (18)$$
where $\Delta = \frac{1}{2}\|x_i - x_j\|_2$ and $\hat{b} = \sin(\arccos(b))$.

In conclusion, an infinitely large SDM with sparse internal representations $\phi_{SDM}$ can be represented as a suboptimal case of a feed-forward SVM network with a translation-invariant kernel.

3.4 The modern Hopfield network is a recurrent SVM network

The Hopfield network [26] is, arguably, the most well-known model of auto-associative memory. In its modern form [35], it is a recurrent network of $N$ neurons with state $s^{(t)}$, whose dynamics are governed by the energy and state update rule
$$E = -\sum_{\mu}^{M} F\!\left( \sum_{i}^{N} \xi^\mu_i s^{(t)}_i \right), \qquad s^{(t+1)}_i = \mathrm{sgn}\!\left[ \sum_{\mu}^{M} \xi^\mu_i\, F'\!\left( \sum_{j \neq i}^{N} \xi^\mu_j s^{(t)}_j \right) \right] \quad (19)$$
where $F$ is a smooth function, typically a sigmoid, polynomial, or exponential. This "generalized" Hopfield model has a long history [see, e.g., 1, 21, 25, 37] but has received renewed attention in recent years under the name modern Hopfield network (MHN) or dense associative memory [17, 35]. By comparing Eq. 19 with Eq. 12, the state update of the MHN can be identified as a special case of a suboptimal recurrent SVM network in the kernel form, with $k = F'$, $A = 1$, and $\theta = 0$ (since $f = 0.5$). With a linear $F'(x) = x$, the MHN reduces to the classical Hopfield network, which is a special case of the recurrent SVM network with the linear kernel $k(x_i^\top x_j) = x_i^\top x_j$.
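To make the correspondence explicit, here is a small NumPy sketch (ours) of the MHN update in Eq. 19, which coincides with Eq. 12 for A = 1 and θ = 0; setting p = 1 recovers the classical Hopfield network:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, p = 100, 500, 3
X = rng.choice([-1, 1], size=(N, M))             # patterns as columns

def mhn_step(s, Fprime):
    """One asynchronous sweep of Eq. 19 / Eq. 12 with A = 1, theta = 0."""
    s = s.copy()
    for i in range(N):
        h = X.T @ s - X[i] * s[i]                # sum_{j != i} xi_j^mu s_j, for all mu at once
        s[i] = np.sign(X[i] @ Fprime(h))
    return s

s = X[:, 0] * np.where(rng.random(N) < 0.1, -1, 1)   # corrupted version of pattern 0
for _ in range(5):
    s = mhn_step(s, lambda h: h ** p)            # F'(h) = h^p; p = 1 gives the classical network
print(np.all(s == X[:, 0]))                      # True if recall succeeded
```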
Capacity. The storage capacity of the MHN has been shown to depend on the shape of $F'$. In the linear case, the capacity is famously limited to $\sim 0.1N$ patterns, depending on the precision of retrieval [2, 42]. If, on the other hand, $F'$ is polynomial with degree $p$, the capacity scales as $M_{max} \sim O(N^p)$ [35], while an exponential $F'$ endows the network with a capacity $M_{max} \sim O(e^N)$ [17]. From the perspective of the kernel memory framework, this scaling follows directly from the analysis in Sec. 3.2 with $k = F'$. In fact, in the regime of low errors, the kernel memory framework can also be used to derive a more precise capacity scaling for the classical Hopfield network. We first note that any one-shot learning rule that implies $A > 0$ is equivalent to an SVM network where every stored pattern is a support vector. Such a heuristic is only likely to be close to the optimal solution, and to perform well, in large networks with very few patterns, as high-dimensional linear SVMs trained on few patterns are highly likely to find solutions where all patterns are support vectors; this effect has been termed support vector proliferation [4]. Restricting the network to this regime limits the capacity to $M_{max} \sim O\!\left( \frac{N}{2 \log N} \right)$, consistent with the result in [42] (see Appendix A.2).

Iterative learning rules. The problem of iteratively training MHNs with biologically plausible online learning rules has recently been studied [64], with a resulting storage capacity ranging from $\sim 0.16N$ to $\sim N$, depending on the exact implementation. The aim, in general, of such studies is to find a learning rule capable of producing a capacity close to the theoretical maximum of $\sim 2N$. For this purpose, the perspective of kernel memory networks can be particularly helpful, as many of the algorithms that have been developed over the past two decades to optimize SVMs can be utilized for MHNs as well. For example, a network formulated in the feature form can be trained with the stochastic batch perceptron rule [14, 34], the passive-aggressive rules [16], the minnorm rule [5], as well as with likelihood maximization applied to logistic regression [29, 46, 59]. In the kernel form, two of the most well-known online algorithms for training linear and non-linear SVMs are the Adatron [3] and the Kernel-Adatron [19]. A performance comparison between iterative learning and the modern Hopfield learning rule can be found in Appendix C.

Generalization. Viewing the MHN as a recurrent network of SVMs can also facilitate a more intuitive understanding of its ability to generalize when used as a conventional classifier. In this setting, one designates a subset of the neurons as input units and the remaining neurons as outputs. Given a set of input–output associations, one optimizes the memory patterns $\xi^\mu$ using, for example, gradient descent. Such an experiment was performed by Krotov and Hopfield [35] on the MNIST data set, using a polynomial non-linearity $F(x) = x^p$. Results showed that the test error first improved as $p$ increased from 2 to 3, but later deteriorated for high degrees, like $p = 20$. While it may be difficult to explain this behavior within an energy-based framework, it is entirely expected when viewed from the SVM perspective: a kernel of low polynomial degree has too few degrees of freedom to fit the classification boundary in the training set, causing underfitting, while a polynomial of too high degree grants the model too much flexibility, which results in overfitting.

The pseudoinverse learning rule. The coefficients in $A$ are, in general, computed numerically and cannot be written in closed form. However, in the special case when Eq. 8 is underdetermined, meaning $M < N$, a closed-form (but suboptimal) solution can be obtained using the least-squares SVM method [60]. The result is a generalized form of the pseudoinverse learning rule [50]. See Appendix D for details.
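For the underdetermined linear case, the classical pseudoinverse rule fits in two lines. A sketch (ours; self-connections are kept for brevity, so this is the generalized rather than the zero-diagonal variant):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 200, 50                                   # underdetermined regime, M < N
X = rng.choice([-1, 1], size=(N, M))             # patterns as columns

# Pseudoinverse rule: W = X pinv(X) = X (X^T X)^{-1} X^T; every pattern is a fixed point
W = X @ np.linalg.pinv(X)

s = X[:, 7] * np.where(rng.random(N) < 0.15, -1, 1)   # 15% of bits flipped
for _ in range(5):
    s = np.sign(W @ s)                           # project onto the pattern subspace, threshold
print(np.all(s == X[:, 7]))                      # True if the pattern was recovered
```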
4 Kernel memory networks for continuous patterns

4.1 Auto-associative memory as a recurrent interpolation network

So far, we have considered memory models designed to store only bipolar patterns. We now relax this constraint and allow patterns to be continuous-valued. We first observe that any set of patterns $X \in \mathbb{R}^{N \times M}$ can be made fixed points of the dynamics by training each neuron $i$ to interpolate $\xi^\mu_i$ when the rest of the network is initialized in $\xi^\mu$, for every pattern $\mu$. Assuming that the model is equipped with a kernel that allows each fixed point to also be attracting, we can ensure that a lower-bounding estimate of the size of the attractor basin is maximized by finding the interpolation with minimum weight norm (see Appendix E.1 for proof). These results are summarized below.

Property 4 (Robust auto-associative memory with continuous patterns). Suppose that the dynamics of a recurrent auto-associative memory network evolve according to the synchronous update rule
$$s^{(t+1)} = X K^\dagger K(X, s^{(t)}) \quad (20)$$
where $K = K(X, X) = \phi(X)^\top \phi(X)$ is the kernel matrix and $K^\dagger$ its Moore–Penrose pseudoinverse, with $K^\dagger = K^{-1}$ if $\phi(X)$ has full column rank. Then the dynamics of the network are guaranteed to have the fixed points $X$. Moreover, if the points are attracting, Eq. 20 maximizes a lower bound of the attractor basin sizes.

4.2 A recurrent interpolation network with exponential capacity

Memory models for continuous data [e.g., 27, 33, 48] have generally received less attention than their binary counterparts. Recently, however, Ramsauer et al. [57] proposed an energy-based model capable of storing an exponential number of continuous-valued patterns (we will refer to this model as the softmax network). While the structure of this model is similar to Eq. 20, it cannot be analyzed within the framework of Property 4, as it involves a kernel that is neither symmetric nor positive-definite [68]. Nonetheless, we will in this section demonstrate that it is possible to use conventional kernel methods to design an attractor network with exponential capacity for continuous patterns. We utilize the properties of the SDM by using a translation-invariant kernel with a fixed spatial scale $r$. For the sake of simplicity, we choose the exponential power kernel (Exp$\beta$)
$$K_{\exp^\beta}(x_i, x_j) = \exp\!\left[ -\left( \frac{1}{r}\|x_i - x_j\|_2 \right)^{\beta} \right] \quad (21)$$
where $\beta, r > 0$. These parameters determine the shape of the attractor basin that surrounds each pattern. While $r$ roughly sets the radius of attraction, $\beta$ represents an inverse temperature, which changes the steepness of the boundary of the attractor basin. Moreover, as long as the patterns are unique, the kernel matrix is invertible and we have $K^\dagger_{\exp^\beta} = K^{-1}_{\exp^\beta}$ [44]. We will now analyze the noise robustness and storage capacity of this model. To make the analysis tractable, we will operate in the regime of low temperatures, meaning the limit $\beta \to \infty$. We first establish the following three properties.

Property 5.1 (The Exp$\beta$ network at zero temperature). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^{M}$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > r$, the state update rule for the Exp$\beta$ network at $\beta \to \infty$ reduces to
$$s^{(t+1)} = X\, \Theta(r - \|X - s^{(t)}\|_2) \quad (22)$$
where $\Theta(\cdot)$ is the Heaviside function with $\Theta(0) = e^{-1}$ (see Appendix E.2.1).

Remark. In geometrical terms, Property 5.1 states that the boundary of the basin of attraction surrounding each pattern becomes a sharp $(N-1)$-dimensional hypersphere with radius $r$ in the limit $\beta \to \infty$. For lower, finite $\beta$, the spherical boundary becomes increasingly fuzzy. From the perspective of an energy landscape, each pattern lies in an $N$-dimensional energy minimum with infinitely steep walls when $\beta \to \infty$. As $\beta$ is lowered, the barriers become progressively smoother.

Property 5.2 (Convergence in one step). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^{M}$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$, the Exp$\beta$ network at $\beta \to \infty$, initialized at $s^{(0)} = \xi^\mu + \delta\xi$, will converge to $\xi^\mu$ in one step if $\|\delta\xi\|_2 < r$.

Property 5.3 (No spurious attractors). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^{M}$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$ and $\nexists \mu : \|\xi^\mu\|_2 = r/(1 - e^{-1})$, the only attractors of the dynamics of the Exp$\beta$ network at $\beta \to \infty$ are the points $\{\xi^\mu\}_{\mu=1}^{M}$, together with $0$ if $\nexists \mu : \|\xi^\mu\|_2 \le r$.

Remark. Properties 5.2 and 5.3 can be shown to be true simply by inserting the expression $s^{(0)} = \xi^\mu + \delta\xi$ in Eq. 22. Assuming no overlaps between the basins of attraction, a quick calculation shows that $s^{(1)} = \xi^\mu$ if $\|\delta\xi\|_2 < r$. If, on the other hand, the network is initialized such that $\|s^{(0)} - \xi^\mu\|_2 > r$, $\forall \mu$, one always obtains $s^{(2)} = \xi^0$, where $\xi^0$ is either $0$ or the pattern closest to $0$. In other words, the network recalls a pattern only if the initialization is close enough to it. If located far from all patterns, the network assumes an "agnostic" state, represented either by the origin or by the pattern closest to the origin (if the origin happens to be located within a basin of attraction).
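A minimal NumPy sketch (ours; parameter choices are assumptions) of the zero-temperature update in Eq. 22, illustrating one-step convergence (Property 5.2) for a query within the attractor radius:

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, r = 50, 10_000, 2.0                        # many more patterns than dimensions
X = rng.normal(size=(N, M))                      # continuous patterns as columns

def step(s):
    """Zero-temperature update (Eq. 22): return the pattern within distance r of s, or 0."""
    hit = np.linalg.norm(X - s[:, None], axis=0) < r
    return X[:, hit].sum(axis=1)                 # exactly one pattern, or the zero vector

s = X[:, 0] + rng.normal(scale=0.2, size=N)      # noisy query; sigma^2 = 0.04 < r^2/N = 0.08
print(np.allclose(step(s), X[:, 0]))             # one-step convergence (Property 5.2)
```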
In the following two properties, we evaluate how the radius of attraction $r$ determines the maximum input noise tolerance and storage capacity.

Property 6 (Robustness to white noise). Assume that we are given a set of unique patterns $\xi^1, \ldots, \xi^M \sim \mathcal{N}(0, I_N)$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$, and that the Exp$\beta$ network is initialized in a distorted pattern $s^{(0)} = \xi^\mu + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2 I_N)$. Then, at $\beta \to \infty$, the maximum noise variance $\sigma^2_{max}$ with which $\xi^\mu$ can be recovered in at least 50% of trials is
$$\sigma^2_{max} = r^2 / N. \quad (23)$$

Property 7 (Exponential storage capacity). At $\beta \to \infty$, and for $N \gg 1$, the average maximum number of patterns sampled from $\mathcal{N}(0, I_N)$ that the Exp$\beta$ network can store and recall without errors is lower-bounded according to
$$M_{max} \ge \sqrt{\frac{2}{\sqrt{\pi N}\,(1 - 2\sigma^2_{max})}}\; \exp\!\left[ \frac{N(1 - 2\sigma^2_{max})^2}{8} \right] \quad (24)$$
where $\sigma^2_{max}$ is the maximum white-noise variance tolerated by the network.

Remark. Proofs can be found in Appendices E.2.2 and E.2.3. Note that Property 7 is valid in the range $\sigma^2_{max} \lesssim 1/2$. While the bounds are fairly tight at the upper end of the range, they become loose when $\sigma^2_{max} \to 0$. In this limit, which is equivalent to $r \to 0$, the storage capacity tends to infinity, as the risk of interference between patterns vanishes when their radius of attraction becomes infinitesimal.

Comparison to the softmax network. If patterns are randomly placed on a hypersphere instead of being normally distributed, the state update rule in Eq. 22 reduces to the form $s^{(t+1)} = X\, \Theta(X^\top s^{(t)} - \theta)$, where $\theta$ is a fixed threshold. While the capacity remains exponential (see Appendix E.3.1), the basin of attraction surrounding each pattern now forms a spherical cap instead of a ball. We can compare this to the softmax network at zero temperature, given by $s^{(t+1)} = \lim_{\beta \to \infty} X\, \mathrm{softmax}(\beta X^\top s^{(t)}) = X\, \mathrm{argmax}(X^\top s^{(t)})$. This model differs from the Exp$\beta$ only in the replacement of $\Theta$ with $\mathrm{argmax}$. This changes the shape of the attractor basins from spherical caps to Voronoi cells, which parcellate the entire surface of the hypersphere into a Voronoi diagram (see Fig. 2). The boundary of each basin is now no longer radially symmetric around a pattern, but instead extends as far as possible in all directions. Consequently, at $\beta \to \infty$, the softmax network has larger attractor basins and always converges to one of the stored patterns, regardless of the initialization point (assuming this is not precisely on a boundary). In contrast, the Exp$\beta$ network may converge to the origin if initialized far from all patterns. This can be interpreted as an agnostic response, which indicates that the model cannot associate the input query with any of its stored patterns.
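The behavioral difference can be seen directly in code. A sketch (ours) comparing the two zero-temperature updates for a query that lies far from every stored pattern:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, r = 50, 100, 0.5
X = rng.normal(size=(N, M))
X /= np.linalg.norm(X, axis=0)                    # patterns on the unit hypersphere
q = rng.normal(size=N); q /= np.linalg.norm(q)    # a query far from all patterns (w.h.p.)

s_exp = X @ (np.linalg.norm(X - q[:, None], axis=0) < r)   # Eq. 22: Theta-style update
s_soft = X[:, np.argmax(X.T @ q)]                          # softmax network at zero temperature
print(np.linalg.norm(s_exp), np.linalg.norm(s_soft))       # ~0.0 (agnostic) vs 1.0 (a pattern)
```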
5 Discussion

Biological interpretation. Kernel memory networks can be mapped onto the anatomical properties of biological neurons. Consider an individual neuron in the feature form of the recurrent network (Eq. 9). The state of the neighboring neurons $s$ is first transformed through $\phi(s)$ and thereafter projected to the neuron through the weight matrix $(A \odot X)\phi(X)^\top$. When the kernel is polynomial of degree $p$, so that $K(x_i, x_j) = (x_i^\top x_j + 1)^p$, the transformation $\phi(s)$ consists of all elements in $s$ and their cross-terms, up to degree $p$. The input to each neuron, in other words, consists of the states of all other neurons, as well as all possible combinations of their multiplicative interactions. This neuron model can be viewed as a generalized form of, for example, the multiconnected neuron [49], the clusteron [43], or the sigma-pi unit [58, p. 73]. These are all perceptrons that include multiplicative input interactions as a means to model synaptic cross-talk and cluster-sensitivity on non-linear dendrites [55] (see Fig. 1).

In the kernel form (Eq. 10), each neuron again implicitly comprises a two-stage process, whereby the raw input $s$ is first transformed through the function $K(X, s)$ and then projected through the weight matrix $A \odot X$. For any inner-product kernel $K = k(x_i^\top x_j)$, this representation can be directly identified as a two-layer neural network, where the hidden layer is defined by the weights $X$ and the activation function $k$. This interpretation of the recurrent network was recently proposed in [35, 36] and discussed in relation to hippocampal–cortical interactions involved in memory storage and recall; it is particularly reminiscent of the hippocampal indexing theory [6, 61]. However, the kernel form can also be viewed as a network in which each individual neuron is a generalized form of the two-layered pyramidal cell model [53, 54], which was originally proposed as an abstract neuron model augmented with non-linear dendritic processing [41]. It should be noted, however, that the idea of interpreting kernel methods as neural networks has a longer history, and has been extensively analyzed in the case of, for example, radial basis functions [51, 52]. For further details, see Appendix F.
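To see the claim about cross-terms concretely, for $p = 2$ the feature map can be written out by hand and checked against the kernel (a sketch, ours):

```python
import numpy as np
from itertools import combinations_with_replacement

def phi2(x):
    """Explicit feature map of K(x, y) = (x . y + 1)^2: constant, linear, and cross-terms."""
    feats = [1.0] + [np.sqrt(2) * v for v in x]
    feats += [np.sqrt(2) * x[i] * x[j] if i != j else x[i] * x[j]
              for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.array(feats)

x, y = np.random.default_rng(8).normal(size=(2, 5))
print(np.isclose(phi2(x) @ phi2(y), (x @ y + 1) ** 2))   # True: the cross-terms are the kernel
```

The quadratic entries $x_i x_j$ are exactly the multiplicative input interactions that the sigma-pi-style neuron models above compute explicitly.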
Summary. We have shown that conventional kernel methods can be used to derive the weights for hetero- and auto-associative memory networks that store binary or continuous-valued patterns with maximal noise tolerance. The result is a family of optimal memory models, which we call kernel memory networks, that includes the SDM and the MHN as special cases. This unifying framework facilitates an intuitive understanding of the storage capacity of memory models, and offers new ways to biologically interpret these models in terms of non-linear dendritic integration. This work formalizes the links between kernel methods, attractor networks, and models of dendritic processing.

Future work. A unifying theoretical framework for memory modeling can be useful for the development both of improved bio-plausible memory models and of machine learning applications. First, recognizing that there exist algorithms for training optimally noise-robust classifiers, and adapting these to biological constraints, can aid the development of normative synaptic three-factor learning rules [23]. Second, the theoretical link between neuron models, kernel functions, and storage capacity makes it possible to fit kernel memory networks to neurophysiological data and to analyze the computational properties of biophysically informed memory models. Finally, our unifying framework reveals that most memory models differ only in the choice of kernel (model complexity) and Lagrange parameters (model precision). This categorization simplifies the tailoring of memory models to their application, and allows for the design of models whose properties depart fundamentally from kernel memory networks, for example by choosing kernels not associated with a reproducing kernel Hilbert space.

Acknowledgments and Disclosure of Funding

This study was supported by funding from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology to the Blue Brain Project, a research center of the École Polytechnique Fédérale de Lausanne (EPFL).
1. What is the focus and contribution of the paper on memory networks? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundations? 3. Are there any concerns or limitations regarding the applicability of the derived models? 4. How does the reviewer assess the clarity and potential impact of the paper's content? 5. What are some possible ways to improve memory models based on the presented theory?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper is a theoretical work in which the authors derive a set of normative models that describe the general structure of memory networks that can perform error-free recall of a given set of input patterns. Starting from well-known properties of binary classifiers, they show how hetero-associative and auto-associative memories can be formulated as feed-forward and recurrent SVM networks, respectively. They also characterize the storage capacity of these networks in terms of their Vapnik–Chervonenkis dimension. Importantly, they show that previously proposed memory models such as Kanerva's sparse distributed memory and Hopfield networks are special cases of the SVM networks. They also extend their auto-associative memory model to store continuous-valued patterns instead of just binary-valued patterns. Finally, they discuss how the models developed here could potentially be mapped to anatomical properties of biological neurons.
Strengths And Weaknesses This paper is a purely theoretical work that develops general expressions for optimal weights in memory networks storing binary or continuous-valued patterns with maximal noise tolerance. It is interesting to think about how the general theory developed here could give rise to better models of memory, and how it could lead to better AI methods. Also, the mapping of kernel attractor networks to anatomical properties of biological neurons is potentially an interesting avenue of research. The presentation is very clear, and it is easy to follow the main ideas.
Questions Are there any ideas on how the theory developed here could give rise to better models of memory?
Limitations The authors discuss the limitations of their approach.
In this work, we aim to clarify some of the overlap between the fields of memory modeling and statistical learning, by integrating and formalizing a set of theoretical connections between Hopfield networks, the SDM, kernel machines, and neuron models with non-linear dendritic processing. ⇤Joint senior authors. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1.1 Our contribution • We derive a set of normative kernel-based models that describe the general mathematical structure of feed-forward (i.e., hetero-associative) and recurrent (i.e., auto-associative) memory networks that can perform error-free recall of a given set of patterns with maximal robustness to noise. • We show that the normative models include, as special cases, the classical and modern Hopfield network, as well as the SDM. • We derive a simple attractor network model for storing an exponential number of continuous-valued patterns with a finite basin of attraction. We discuss its similarity to attention. Furthermore, we explain how classifiers with non-linear kernels can be interpreted as general forms of neuron models with non-linear dendritic activation functions and synaptic cross-talk. 1.2 Related work Our work is primarily related to [8, 9, 35, 36, 45, 57]. While MHNs are extensively analyzed in [35, 36, 45, 57], the approach is energy-based and makes no statements about the relation between MHNs and kernel methods; a brief comment in [57] mentions some similarity to SVMs, but this is not further explained. The work by [8] focuses on the SDM and its connection to attention. It observes that the classical Hopfield network is a special case of the SDM, but no further generalization is made, and kernel methods are not mentioned. In our work, we place MHNs and the SDM in a broader theoretical context by showing that both models are special suboptimal cases of a family of memory networks that can be derived with a normative kernel-based approach. 2 Background Consider the following simple model of hetero-associative memory: a single-layer feed-forward network consisting of a single output neuron connected to Nin inputs with the weights w 2 RN . The output sout 2 {±1} is given by sout = sgn h w> (sin) ✓ i (1) where sin is the input vector (also called query), ✓ the threshold, and a function that maps the “raw” input to a N -dimensional feature space, where typically N Nin. Suppose that we are given a set of M input-output patterns {⇠µin, ⇠ µ out} M µ=1, in which every entry ⇠ is randomly drawn from {±1} with sparseness f := P(⇠=1). In order for the neuron to store the patterns in a way that maximizes the amount of noise it can tolerate while still being able to recall all patterns without errors, one needs to find the weights that produce the output ⇠µout in response to the input ⇠ µ in, 8µ, and that maximize the smallest Euclidean distance between the inputs and the neuron’s decision boundary. Using Gardner’s formalism [20, 22], this problem can be expressed as argmax w s. t. ⇠µout ⇣ w> (⇠µin) ✓ ⌘ , 8µ kwk2 = w̄ (2) where w̄ > 0 is a constant. This is equivalent to solving min w kwk2 s. t. ⇠ µ out ⇣ w> (⇠µin) ✓ ⌘ 1, 8µ (3) which can be directly identified as the support vector machine (SVM) problem for separable data [13]. The solution to Eq. 
3 can today be found in any textbook on basic machine learning methods, and yields an optimal output rule that can be written in a feature and kernel form sout = sgn 2 4 MX µ ↵µ⇠µout (⇠ µ in) > (sin) ✓ 3 5 = sgn 2 4 MX µ ↵µ⇠µoutK(⇠ µ in, sin) ✓ 3 5 (4) where we, in the latter expression, have used the “kernel-trick” K(xi,xj) = (xi)> (xj). The solution depends on the Lagrange coefficients ↵µ 0, many of which are typically zero. Patterns with ↵µ > 0 are called support vectors. 3 Kernel memory networks for binary patterns 3.1 Hetero-associative memory as a feed-forward SVM network We begin by considering a hetero-associative memory network with an arbitrary number Nout output neurons, whose combined state we denote sout. In order for the network as a whole to be able to tolerate a maximal level of noise and still successfully recall its stored memories, we solve Eq. 3 for each neuron independently. As each neuron can have a different classification boundary along with a different set of support vectors, its weights will, in general, be characterized by an independent set of M Lagrange coefficients. To simplify the notation, we represent these coefficients ↵µi , across neurons i and patterns µ, as entries in the matrix A, where (A)iµ = ↵µi . We also combine all thresholds in the vector ✓ = (✓1, . . . , ✓Nout), and all input and output patterns as columns in the matrices Xin = (⇠1in, . . . , ⇠Min ) and Xout = (⇠1out, . . . , ⇠Mout). Finally, we assume that all neurons have the same feature map, so that i = , 8i (see Fig. 1). All functions are applied column-wise when the argument is a matrix, for example (Xin) = ( (⇠1in), . . . , (⇠Min )). The optimal response of the network can now be compactly summarized as follows. Property 1 (Robust hetero-associative memory network). A single-layer hetero-associative memory network trained to recall the patterns Xout in response to the inputs Xin with maximal noise robustness, has an optimal output rule that can be written as sout = sgn h (A Xout) (Xin) > (sin) ✓ i (feature form) (5) = sgn ⇥ (A Xout)K(Xin, sin) ✓ ⇤ (kernel form) (6) where denotes the Hadamard product. 3.2 Auto-associative memory as a recurrent SVM network The hetero-associative network can be made auto-associative by setting Nout = Nin and Xout = Xin. The network is now effectively recurrent, as each neuron can serve both as an input and output simultaneously (see Fig. 1). Consider a recurrent network with N neurons, whose state at time point t is denoted s(t) 2 {±1}N , and whose dynamics evolve according to the update rule s(t+1)i = sgn h w>i (s (t)) ✓i i (7) where wi 2 RN is the weight vector to neuron i = 1, . . . , N . In order to make the patterns {⇠µ}Mµ=1 fixed points of the network dynamics, we train each neuron i independently on every pattern µ to, again, produce the response ⇠µi when the rest of the network is initialized in ⇠ µ. Moreover, we maximize the amount of noise that can be tolerated by the network while maintaining error-free recall by maximizing the smallest Euclidean distance between each neuron’s decision boundary and its inputs. This maximizes the size of the attractor basins [18, 32]. The problem of training the entire network is, in this way, transformed into the problem of training N separate classifiers according to min wi kwik2 s. t. ⇠ µ i ⇣ w>i (⇠ µ) ✓i ⌘ 1, 8µ, i . (8) The solution can be obtained by slightly modifying Property 1, and is stated below. Property 2.1 (Robust auto-associative memory). 
A recurrent auto-associative memory network trained to recall the patterns X with maximal noise robustness has an optimal synchronous update rule that can be written as s(t+1) = sgn h (A X) (X)> (s(t)) ✓ i (feature form) (9) = sgn h (A X)K(X, s(t)) ✓ i (kernel form) (10) Remark. With a linear feature map (x) = x, the optimal update is reduced to s(t+1) = sgn h (A X)X>s(t) ✓ i (11) where (A X)X> can be identified as the general form of the optimal weight matrix. The solution described by Property 2.1 does not, in general, prohibit a neuron from having selfconnections. Applying this constraint yields the following result. Property 2.2 (Robust auto-associative memory without self-connections). A recurrent autoassociative memory network without self-connections, with the inner-product kernel K(xi,xj) = k(x>i xj), that has been trained to recall the patterns X with maximal noise robustness, has an optimal asynchronous update rule that can be written in the kernel form s(t+1)i = sgn 2 64 MX µ ↵µi ⇠ µ i k 0 @ NX j 6=i ⇠µj s (t) j 1 A ✓i 3 75 . (12) Storage capacity. An intuition for the storage capacity scaling of the hetero- and auto-associative memory networks can be gained by observing that the network as a whole will be able to successfully recall patterns as long as each neuron is able to correctly classify its inputs (or is very unlikely to produce an error). The capacity of the network can thereby be derived from the capacity of each individual neuron. It is well-known that a linear binary classifier can learn to correctly discriminate a maximum of Mmax ⇡ 2DVC random patterns, where DVC is the Vapnik-Chervonenkis dimension of the classifier [15, 20, 40, ch. 40]. For a neuron with N inputs and a linear feature map (x) = x, this results in DVC = N and, thus, the capacity Mmax ⇡ 2N . Suppose, on the other hand, that the kernel is a homogeneous polynomial of degree p, so that K(xi,xj) = (x>i xj)p. In this case, will contain all monomials of degree p composed of the entries in x. As there are O(Np) unique p-degree monomials (see Appendix A.1), the input dimensionality and Mmax will be O(Np). For the exponential kernel, which we can write as K(xi,xj) = exp(x>i xj) = P1 p=0(x > i xj) p/p!, the dimensionality of will be PN p=0 N p = 2N , which yields Mmax ⇠ O(eN ). Special cases. In the following sections, we will show that many of the models of hetero- and auto-associative memory that have been proposed over the past years are special cases of the solutions in Properties 1, 2.1, and 2.2, characterized by specific choices of A, , and K. 3.3 Kanerva’s sparse distributed memory is a feed-forward SVM network The sparse distributed memory (SDM), developed by Kanerva [30], is one of the most famous examples of a hetero-associative memory model. It has lately received much attention in the context of generative memory models [69] and attention layers in transformers [8]. The SDM consists of a register of N memory slots, each associated with an address zi 2 {±1}Nin , i = 1, . . . , N . All addresses are listed as rows in the matrix Z = (z1, . . . , zN )>. The content of each slot is represented by an Nout-dimensional vector, initialized at zero. Suppose that we wish to store the M patterns Xout = (⇠1out, . . . , ⇠Mout) in the addresses Xin = (⇠1in, . . . , ⇠Min ), where all entries are random and bipolar. The basic idea of the SDM is to write the data to, and later read it from, multiple memory slots at once (hence the distributed storage); this ensures a degree of noise-robustness. 
In mathematical terms, the read-out of the SDM provided with a query sin, is given by sout = sgn h Xout ⇥(ZXin b) > ⇥(Zsin b) i (13) where ⇥ the Heaviside function with bias b = Nin 2r, and r is a parameter that determines the precision of the writing and reading process. Upon comparing Eqs. 13 and 5, the SDM can be directly identified as a special case of a suboptimal feed-forward SVM network in the feature form, with A = 1, ✓ = 0, and the feature map SDM(x) = ⇥(Zx b). When viewed as a kernel method, the function of the SDM is to store the dense addresses Xin as sparse high-dimensional representations SDM, to make it easier to later determine the slots closest to a query sin, and retrieve the relevant data. Capacity. As the SDM is linear in SDM, with DVC ⇡ N , it follows from the analysis in Sec. 3.2 that one should expect the capacity to scale as Mmax ⇠ O(N ). Moreover, one should expect a proportionality constant ⇠0.1, since the SDM is suboptimal relative to the feed-forward SVM network, analogously to how the classical Hopfield network is suboptimal relative to the recurrent SVM network (see Sec. 3.4). This is consistent with earlier proofs [12, 31]. Kernel of an infinite SDM. In practice, an SDM with a large number of memory slots N requires calculations involving a large address matrix Z. This can be avoided by applying the kernel-trick to Eq. 13 in the limit N ! 1, which allows for the output to be computed with sout = sgn ⇥ XoutKSDM(Xin, sin) ⇤ (14) where we have defined the kernel as KSDM(xi,xj) = lim N !1 SDM(xi)> SDM(xj) N (15) in order to ensure convergence. In this section, we will derive this kernel for two different variants of the SDM and demonstrate that both are translation-invariant. It is interesting to note here that SDM is equivalent to a single-layer neural network with N neurons, weights Z, and bias b. This means that KSDM is equivalent to the kernel of an infinitely wide neural network [11, 47, 67]. We begin by noticing that SDM(x) has a geometrical interpretation [8, 31]. It is a binary vector that indicates those memory addresses in Z that differ by at most r bits compared to x. For any two bipolar vectors z and x, the bit-wise difference can be computed as 12 |z x| = 1 4kz xk 2 2. This means that SDM(x) indicates all addresses that lie within a sphere centered at x with radius 2 p r. Consequently, the inner product SDM(xi)> SDM(xj) is the number of addresses located in the overlapping volume of two spheres centered at xi and xj . Although an exact calculation of this quantity can be found in [8, 30], its connection to the SDM kernel has, to the best of our knowledge, not previously been made. We therefore modify the previously published expression with a normalization factor 1/2Nin and state the following property. Property 3.1 (Kernel of an infinite SDM on the hypercube). In the limit N ! 1, the kernel of an SDM with N memory slots, whose addresses are randomly drawn from {±1}Nin , is given by KSDM(xi,xj) = 1 2Nin Nin X i=Nin r b 2c (Nin r i)X j=[Nin r i]+ ✓ Nin i ◆ · ✓ j ◆ (16) where r is the bit-wise error threshold and is the bit-wise difference between xi and xj , given by = 12 |xi xj | = 1 4kxi xjk 2 2. The SDM can also be implemented with continuous addresses, randomly placed on a unit hypersphere of (Nin 1) dimensions, denoted SNin 1. The vector SDM(x) now indicates all addresses that lie within a hyperspherical cap centered at x with an angle arccos(b) between its central axis and the rim. 
The inner product SDM(xi)> SDM(xj) is the number of addresses located in the overlapping area of two spherical caps centered at xi and xj . While a calculation of this quantity, again, can be found in [8], it has not previously been connected to the kernel of an SDM. We simplify the previously published result and also derive a closed-form approximation, valid for highly sparse SDM (see Appendix B for details). The results are summarized below. Property 3.2 (Kernel of an infinite SDM on the hypersphere). In the limit N ! 1, the kernel of an SDM with N memory slots, whose addresses are randomly drawn from SNin 1, is given by KSDM(xi,xj) = Nin 2 2⇡ Z ↵b ↵x sin(')Nin 2B " 1 tan2(↵x) tan2(') ; Nin 2 2 , 1 2 # d' (17) where ↵x = 12 arccos(x > i xj), ↵b = arccos(b), and B is the incomplete Beta function. In the highly sparse regime, when 0.9 . b < 1 and 1N k SDMk0 ⌧ 1, the kernel can be approximated with KSDM(xi,xj) ⇡ b̂Nin 1 2⇡ B " 1 ✓ b̂ ◆2 ; Nin 2 , 1 2 # (18) where = 12kxi xjk2 and b̂ = sin(arccos(b)). In conclusion, an infinitely large SDM with sparse internal representations SDM, can be represented as a suboptimal case of a feed-forward SVM network with a translation-invariant kernel. 3.4 The modern Hopfield network is a recurrent SVM network The Hopfield network [26] is, arguably, the most well-known model of auto-associative memory. In its modern form [35], it is a recurrent network of N neurons with the state s(t), whose dynamics are governed by the energy and state update rule E = MX µ F 0 @ NX i ⇠µi s (t) i 1 A, s(t+1)i = sgn 2 64 MX µ ⇠µi F 0 0 @ NX j 6=i ⇠µj s (t) j 1 A 3 75 (19) where F is a smooth function, typically a sigmoid, polynomial, or exponential. This “generalized” Hopfield model has a long history [see, e.g., 1, 21, 25, 37] but has received renewed attention in recent years under the name modern Hopfield network (MHN) or dense associative memory [17, 35]. By comparing Eq. 19 with Eq. 12, the state update of the MHN can be identified as a special case of a suboptimal recurrent SVM network in the kernel form, with k = F 0, A = 1, and ✓ = 0 (since f = 0.5). With a linear F 0(x) = x, the MHN reduces to the classical Hopfield network, which is a special case of the recurrent SVM network with the linear kernel k(x>i xj) = x>i xj . Capacity. The storage capacity of the MHN has been shown to depend on the shape of F 0. In the linear case, the capacity is famously limited to ⇠0.1N patterns, depending on the precision of retrieval [2, 42]. If, on the other hand, F 0 is polynomial with degree p, the capacity scales as Mmax ⇠ O(Np) [35], while an exponential F 0 endows the network with a capacity Mmax ⇠ O(eN ) [17]. From the perspective of the kernel memory framework, this scaling directly follows from the analysis in Sec. 3.2 with k = F 0. In fact, in the regime of low errors, the kernel memory framework can also be used to derive a more precise capacity scaling for the classical Hopfield network. We first note that any one-shot learning rule that implies A > 0 is equivalent to an SVM network where every stored pattern is a support vector. Such a heuristic is only likely to be close to the optimal solution and perform well in large networks with very few patterns, as high-dimensional linear SVMs trained on few patterns are highly likely to find solutions where all patterns are support vectors; this effect has been termed support vector proliferation [4]. 
Restricting the network to this regime limits the capacity to Mmax ⇠ O( N2 logN ), consistent with the result in [42] (see Appendix A.2). Iterative learning rules. The problem of iteratively training MHNs with biologically plausible online learning rules has recently been studied [64], with a resulting storage capacity ranging from ⇠0.16N to ⇠N , depending on the exact implementation. The aim, in general, of such studies is to find a learning rule capable of producing a capacity close to the theoretical maximum ⇠2N . For this purpose, the perspective of kernel memory networks can be particularly helpful, as many of the algorithms that have been developed over the past two decades to optimize SVMs can be utilized for MHNs as well. For example, a network formulated in the feature form can be trained with the stochastic batch perceptron rule [14, 34], the passive aggressive rules [16], the minnorm rule [5], as well as with likelihood maximization applied to logistic regression [29, 46, 59]. In the kernel form, two of the most well-known online algorithms for training linear and non-linear SVMs are the Adatron [3] and the Kernel-Adatron [19]. A performance comparison between iterative learning and the modern Hopfield learning rule can be found in Appendix C. Generalization. Viewing the MHN as a recurrent network of SVMs can also facilitate a more intuitive understanding of its ability to generalize, when used as a conventional classifier. In this setting, one designates a subset of the neurons as input units, and the remaining neurons as outputs. Given a set of input-output associations, one optimizes the memory patterns ⇠µ using, for example, gradient descent. Such an experiment was performed by Krotov and Hopfield [35] on the MNIST data set, using a polynomial non-linearity F (x) = xp. Results showed that the test error first improved as p increased from 2 to 3, but later deteriorated for high degrees, like p = 20. While it may be difficult to explain this behavior within an energy-based framework, it is entirely expected when viewed from the SVM perspective: a kernel of low polynomial degree has too few degrees of freedom to fit the classification boundary in the training set, causing underfitting, while a polynomial of too high degree grants the model too much flexibility, which results in overfitting. The pseudoinverse learning rule. The coefficients in A are, in general, computed numerically, and cannot be written in closed form. However, in the special case when Eq. 8 is underdetermined, meaning M < N , a closed-form (but suboptimal) solution can be obtained using the least-squares SVM method [60]. The result is a generalized form of the pseudoinverse learning rule [50]. See Appendix D for details. 4 Kernel memory networks for continuous patterns 4.1 Auto-associative memory as a recurrent interpolation network So far, we have considered memory models designed to store only bipolar patterns. We now relax this constraint and allow patterns to be continuous-valued. We first observe that any set of patterns X 2 RN⇥M can be made fixed points of the dynamics by training each neuron i to interpolate ⇠µi when the rest of the network is initialized in ⇠µ, for every pattern µ. Assuming that the model is equipped with a kernel that allows for each fixed point to also be attracting, we can ensure that a lower bounding estimate of the size of the attractor basin is maximized by finding the interpolation with minimum weight norm (see Appendix E.1 for proof). These results are summarized below. 
Property 4 (Robust auto-associative memory with continuous patterns). Suppose that the dynamics of a recurrent auto-associative memory network evolve according to the synchronous update rule s(t+1) = XK†K(X, s(t)) (20) where K = K(X,X) = (X)> (X) is the kernel matrix and K† its Moore-Penrose pseudoinverse, where K† = K 1 if (X) is full column rank. Then, the dynamics of the network is guaranteed to have the fixed points X. Moreover, if the points are attracting, Eq. 20 maximizes a lower bound of the attractor basin sizes. 4.2 A recurrent interpolation network with exponential capacity Memory models for continuous data [e.g., 27, 33, 48] have generally received less attention than their binary counterparts. Recently, however, Ramsauer et al. [57] proposed an energy-based model capable of storing an exponential number of continuous-valued patterns (we will refer to this model as the softmax network). While the structure of this model is similar to Eq. 20, it cannot be analyzed within the framework of Property 4, as it involves a kernel that is neither symmetric nor positive-definite [68]. Nonetheless, we will in this section demonstrate that it is possible to use conventional kernel methods to design an attractor network with exponential capacity for continuous patterns. We utilize the properties of the SDM by using a translation-invariant kernel with a fixed spatial scale r. For the sake of simplicity, we choose the exponential power kernel (Exp ) Kexp (xi,xj) = exp " ✓ 1 r kxi xjk2 ◆ # (21) where , r > 0. These parameters determine the shape of the attractor basin that surrounds each pattern. While r roughly sets the radius of attraction, represents an inverse temperature which changes the steepness of the boundary of the attractor basin. Moreover, as long as the patterns are unique, the kernel matrix is invertible and we have K†exp = K 1 exp [44]. We will now analyze the noise robustness and storage capacity of this model. To make the analysis tractable, we will operate in the regime of low temperatures, meaning the limit ! 1. We first establish the following three properties. Property 5.1 (The Exp network at zero temperature). Given a set of unique patterns {⇠µ}Mµ=1 with minµ,⌫ 6=µk⇠µ ⇠⌫k2 > r, the state update rule for the Exp network at ! 1 reduces to s(t+1) = X⇥(r kX s(t)k2) (22) where ⇥(·) is the Heaviside function with ⇥(0) = e 1 (see Appendix E.2.1). Remark. In geometrical terms, Property 5.1 states that the boundary of the basin of attraction surrounding each pattern becomes a sharp (N 1)-dimensional hypersphere with radius r in the limit ! 1. For lower, finite , the spherical boundary becomes increasingly fuzzy. From the perspective of an energy landscape, each pattern lies in an N -dimensional energy minimum with infinitely steep walls when ! 1. As is lowered, the barriers become progressively smoother. Property 5.2 (Convergence in one step). Given a set of unique patterns {⇠µ}Mµ=1 with minµ,⌫ 6=µk⇠µ ⇠⌫k2 > 2r, the Exp network at ! 1, initialized at s(0) = ⇠µ + ⇠, will converge to ⇠µ in one step if k ⇠k2 < r. Property 5.3 (No spurious attractors). Given a set of unique patterns {⇠µ}Mµ=1 with minµ,⌫ 6=µk⇠µ ⇠⌫k2 > 2r and @µ : k⇠µk2 = r/(1 e 1), the only attractors of the dynamics of the Exp network at ! 1 are the points {⇠µ}Mµ=1, together with 0 if @µ : k⇠µk2 r. Remark. Properties 5.2 and 5.3 can be shown to be true simply by inserting the expression s(0) = ⇠µ + ⇠ in Eq. 22. 
Assuming no overlaps between the basins of attraction, a quick calculation shows that s(1) = ⇠µ if k ⇠k2 < r. If, on the other hand, the network is initialized such that ks(0) ⇠µk2 > r, 8µ, one always obtains s(2) = ⇠0, where ⇠0 is either 0 or the pattern closest to 0. In other words, the network recalls a pattern only if the initialization is close enough to it. If located far from all patterns, the network assumes an “agnostic” state, represented either by the origin or the pattern closest to the origin (if the origin happens to be located within a basin of attraction). In the following two properties, we evaluate how the radius of attraction r determines the maximum input noise tolerance and storage capacity. Property 6 (Robustness to white noise). Assume that we are given a set of unique patterns ⇠1, . . . , ⇠M ⇠ N (0, IN ) with minµ,⌫ 6=µk⇠µ ⇠⌫k2 > 2r, and that the Exp network is initialized in a distorted pattern s(0) = ⇠µ + ✏, where ✏ ⇠ N (0, 2IN ). Then, at ! 1, the maximum noise variance 2max with which ⇠µ can be recovered in at least 50% of trials is 2max = r 2/N . (23) Property 7 (Exponential storage capacity). At ! 1, and for N 1, the average maximum number of patterns sampled from N (0, IN ) that the Exp network can store and recall without errors is lower-bounded according to Mmax q 2 p ⇡N(1 2 2max) exp " N(1 2 2max) 2 8 # (24) where 2max is the maximum white noise variance tolerated by the network. Remark. Proofs can be found in Appendices E.2.2 and E.2.3. Note that Property 7 is valid in the range 2max . 1/2. While the bounds are fairly tight at the upper end of the range, they become loose when 2max ! 0. In this limit, which is equivalent to r ! 0, the storage capacity tends to infinity, as the risk of interference between patterns vanishes when their radius of attraction becomes infinitesimal. Comparison to the softmax network. If patterns are randomly placed on a hypersphere instead of being normally distributed, the state update rule in Eq. 22 reduces to the form s(t+1) = X⇥(X>s(t) ✓), where ✓ is a fixed threshold. While the capacity remains exponential (see Appendix E.3.1), the basin of attraction surrounding each pattern now forms a spherical cap instead of a ball. We can compare this to the softmax network at zero temperature, given by s(t+1) = lim !1 X softmax( X>s(t)) = X argmax(X>s(t)). This model differs from the Exp only in a replacement of ⇥ with argmax. This changes the shape of the attractor basins from spherical caps to Voronoi cells, which parcellate the entire surface of the hypersphere into a Voronoi diagram (see Fig. 2). The boundary of each basin is now no longer radially symmetric around a pattern, but instead extends as far as possible in all directions. Consequently, at ! 1, the softmax network has larger attractor basins and always converges to one of the stored patterns, regardless of the initialization point (assuming this is not precisely on a boundary). In contrast, the Exp network may converge to the origin if initialized far from all patterns. This can be interpreted as an agnostic response, which indicates that the model cannot associate the input query with any of its stored patterns. 5 Discussion Biological interpretation. Kernel memory networks can be mapped to the anatomical properties of biological neurons. Consider an individual neuron in the feature form of the recurrent network (Eq. 9). The state of neighboring neurons s is first transformed through (s) and thereafter projected to the neuron through the weight matrix (A X) (X)>. 
5 Discussion

Biological interpretation. Kernel memory networks can be mapped to the anatomical properties of biological neurons. Consider an individual neuron in the feature form of the recurrent network (Eq. 9). The state of neighboring neurons $s$ is first transformed through $\phi(s)$ and thereafter projected to the neuron through the weight matrix $(A \circ X)\,\phi(X)^\top$. When the kernel is polynomial of degree $p$, so that $K(x_i, x_j) = (x_i^\top x_j + 1)^p$, the transformation $\phi(s)$ consists of all elements in $s$ and their cross-terms, up to degree $p$. The input to each neuron, in other words, consists of the states of all other neurons, as well as all possible combinations of their multiplicative interactions. This neuron model can be viewed as a generalized form of, for example, the multiconnected neuron [49], the clusteron [43], or the sigma-pi unit [58, p. 73]. These are all perceptrons that include multiplicative input interactions as a means to model synaptic cross-talk and cluster-sensitivity on non-linear dendrites [55] (see Fig. 1). In the kernel form (Eq. 10), each neuron again implicitly comprises a two-stage process, whereby the raw input $s$ is first transformed through the function $K(X, s)$ and then projected through the weight matrix $A \circ X$. For any inner-product kernel $K = k(x_i^\top x_j)$, this representation can be directly identified as a two-layer neural network, where the hidden layer is defined by the weights $X$ and the activation function $k$. This interpretation of the recurrent network was recently proposed in [35, 36] and discussed in relation to hippocampal-cortical interactions involved in memory storage and recall; it is particularly reminiscent of the hippocampal indexing theory [6, 61]. However, the kernel form can also be viewed as a network in which each individual neuron is a generalized form of the two-layered pyramidal cell model [53, 54]. This was originally proposed as an abstract neuron model augmented with non-linear dendritic processing [41]. It should be noted, however, that the idea of interpreting kernel methods as neural networks has a longer history, and has been extensively analyzed in the case of, for example, radial basis functions [51, 52]. For further details, see Appendix F.
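As a small self-contained check of the polynomial-kernel claim (a sketch with illustrative names, not code from the paper), one can verify that $(x^\top y + 1)^p$ equals an inner product of explicit monomial feature maps, built here as a Kronecker power:

```python
import numpy as np
from functools import reduce

def phi(x, p):
    """Explicit feature map for K(x, y) = (x @ y + 1) ** p: append a bias 1,
    then take the p-fold Kronecker power, i.e. all degree-<=p monomials."""
    x1 = np.append(x, 1.0)
    return reduce(np.kron, [x1] * p)

rng = np.random.default_rng(1)
x, y, p = rng.standard_normal(4), rng.standard_normal(4), 3
print((x @ y + 1) ** p)        # kernel-trick evaluation
print(phi(x, p) @ phi(y, p))   # explicit monomial features: same value
```

The feature dimension grows as $(N+1)^p$, which is the $O(N^p)$ blow-up that the kernel trick avoids.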
Summary. We have shown that conventional kernel methods can be used to derive the weights for hetero- and auto-associative memory networks storing binary or continuous-valued patterns with maximal noise tolerance. The result is a family of optimal memory models, which we call kernel memory networks, that includes the SDM and MHN as special cases. This unifying framework facilitates an intuitive understanding of the storage capacity of memory models and offers new ways to biologically interpret these in terms of non-linear dendritic integration. This work formalizes the links between kernel methods, attractor networks, and models of dendritic processing.

Future work. A unifying theoretical framework for memory modeling can be useful for the development both of improved bio-plausible memory models and of machine learning applications. First, recognizing that there exist algorithms for training optimally noise-robust classifiers, and adapting these to biological constraints, can aid the development of normative synaptic three-factor learning rules [23]. Second, the theoretical link between neuron models, kernel functions, and storage capacity enables one to fit kernel memory networks to neurophysiological data and to analyze the computational properties of biophysically informed memory models. Finally, our unifying framework reveals that most memory models differ only in the choice of kernel (model complexity) and Lagrange parameters (model precision). This categorization simplifies the tailoring of memory models to their application, and allows for the design of models whose properties can depart fundamentally from those of kernel memory networks, for example, by choosing kernels not associated with a reproducing kernel Hilbert space.

Acknowledgments and Disclosure of Funding

This study was supported by funding from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology to the Blue Brain Project, a research center of the École Polytechnique Fédérale de Lausanne (EPFL).
1. What is the focus and contribution of the paper regarding hetero- and auto-associative memory models? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and potential impact on future progress in the field? 3. Do you have any concerns or suggestions regarding the paper's limitations or potential negative societal impact? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work provides a nice framework to describe hetero- and auto-associative memory models. This framework can be used to describe the dynamics and capacity of different existing models, such as SDMs and Hopfield networks. As background, the authors provide an explanation of kernel methods, which is then needed to understand the contributions of the paper: they show how the kernel trick for "one-hidden-layer nets" is equivalent to finding the best set of weights with minimum norm, and present a general property of multilevel feedforward kernels. The first contribution (Section 3) consists of defining auto-associative memory models with binary states as recurrent SVMs. Particularly, they state that the larger the attractor basins, the more robust the model. This is then used to draw connections with previous models in the literature, such as SDMs and Hopfield models. This study is then extended to include recent models, generalized to continuous patterns. Strengths And Weaknesses Pros: Very nice paper, well written and straight to the point: it does exactly what is stated in the title. Cons: I believe the paper is suffering from the 9-page limit, as the authors use most of the space to derive the results, while a little more space would allow interesting discussions. Particularly, the introduction and conclusion do not address future work or possible implications of the work in any way. Questions You state that your "hope is that a proper understanding of the theoretical overlap of contemporary memory models can facilitate the development" of new models. Generally, I agree that fundamental works that are able to identify underlying common principles of different models are important in the literature. However, in your specific case, how do you think your formulation will help future progress in the field? How will your work influence future applications of AM models? Minor: Remove one of the two remarks at lines 82 and 102 Limitations The very few limitations (it is a theoretical paper) are discussed, and I do not see any negative societal impact of this work.
NIPS
Title Kernel Memory Networks: A Unifying Framework for Memory Modeling Abstract We consider the problem of training a neural network to store a set of patterns with maximal noise robustness. A solution, in terms of optimal weights and state update rules, is derived by training each individual neuron to perform either kernel classification or interpolation with a minimum weight norm. By applying this method to feed-forward and recurrent networks, we derive optimal models, termed kernel memory networks, that include, as special cases, many of the hetero- and auto-associative memory models that have been proposed over the past years, such as modern Hopfield networks and Kanerva's sparse distributed memory. We modify Kanerva's model and demonstrate a simple way to design a kernel memory network that can store an exponential number of continuous-valued patterns with a finite basin of attraction. The framework of kernel memory networks offers a simple and intuitive way to understand the storage capacity of previous memory models, and allows for new biological interpretations in terms of dendritic non-linearities and synaptic cross-talk.

1 Introduction

Although the classical work on attractor neural networks reached its peak in the late 1980s, with the publication of a number of seminal works [e.g., 2, 20, 22, 26], recent years have seen a renewed interest in the topic, motivated by the popularity of the attention mechanism [65], external memory-augmented neural networks [24, 66], as well as a new generation of energy-based attractor network models, termed modern Hopfield networks (MHNs), capable of vastly increased memory storage [17, 35]. Recent efforts to understand the theoretical foundation of the attention mechanism have, in fact, shown that it can be linked to Hopfield networks [36, 57], but also to Kanerva's sparse distributed memory (SDM) [8, 30], and to the field of kernel machines [63, 68]. The last connection is particularly intriguing, in light of the many theoretical commonalities between neural networks and kernel methods [10, 11, 28, 47, 67]. Overall, these results suggest that a unified view can offer new insights into memory modeling and new tools for leveraging memory in machine learning.
In this work, we aim to clarify some of the overlap between the fields of memory modeling and statistical learning, by integrating and formalizing a set of theoretical connections between Hopfield networks, the SDM, kernel machines, and neuron models with non-linear dendritic processing.

1.1 Our contribution

• We derive a set of normative kernel-based models that describe the general mathematical structure of feed-forward (i.e., hetero-associative) and recurrent (i.e., auto-associative) memory networks that can perform error-free recall of a given set of patterns with maximal robustness to noise.
• We show that the normative models include, as special cases, the classical and modern Hopfield network, as well as the SDM.
• We derive a simple attractor network model for storing an exponential number of continuous-valued patterns with a finite basin of attraction. We discuss its similarity to attention. Furthermore, we explain how classifiers with non-linear kernels can be interpreted as general forms of neuron models with non-linear dendritic activation functions and synaptic cross-talk.

1.2 Related work

Our work is primarily related to [8, 9, 35, 36, 45, 57]. While MHNs are extensively analyzed in [35, 36, 45, 57], the approach is energy-based and makes no statements about the relation between MHNs and kernel methods; a brief comment in [57] mentions some similarity to SVMs, but this is not further explained. The work by [8] focuses on the SDM and its connection to attention. It observes that the classical Hopfield network is a special case of the SDM, but no further generalization is made, and kernel methods are not mentioned. In our work, we place MHNs and the SDM in a broader theoretical context by showing that both models are special suboptimal cases of a family of memory networks that can be derived with a normative kernel-based approach.

2 Background

Consider the following simple model of hetero-associative memory: a single-layer feed-forward network consisting of a single output neuron connected to $N_{\text{in}}$ inputs with the weights $w \in \mathbb{R}^N$. The output $s_{\text{out}} \in \{\pm 1\}$ is given by

$s_{\text{out}} = \mathrm{sgn}\left[w^\top \phi(s_{\text{in}}) - \theta\right]$ (1)

where $s_{\text{in}}$ is the input vector (also called the query), $\theta$ the threshold, and $\phi$ a function that maps the "raw" input to an $N$-dimensional feature space, where typically $N \gg N_{\text{in}}$. Suppose that we are given a set of $M$ input-output patterns $\{\xi^\mu_{\text{in}}, \xi^\mu_{\text{out}}\}_{\mu=1}^M$, in which every entry $\xi$ is randomly drawn from $\{\pm 1\}$ with sparseness $f := P(\xi = 1)$. In order for the neuron to store the patterns in a way that maximizes the amount of noise it can tolerate while still being able to recall all patterns without errors, one needs to find the weights that produce the output $\xi^\mu_{\text{out}}$ in response to the input $\xi^\mu_{\text{in}}$, for all $\mu$, and that maximize the smallest Euclidean distance between the inputs and the neuron's decision boundary. Using Gardner's formalism [20, 22], this problem can be expressed as

$\operatorname*{argmax}_{w} \kappa \quad \text{s.t.} \quad \xi^\mu_{\text{out}}\left(w^\top \phi(\xi^\mu_{\text{in}}) - \theta\right) \geq \kappa \ \ \forall \mu, \quad \|w\|_2 = \bar{w}$ (2)

where $\bar{w} > 0$ is a constant and $\kappa$ the margin. This is equivalent to solving

$\min_{w} \|w\|_2 \quad \text{s.t.} \quad \xi^\mu_{\text{out}}\left(w^\top \phi(\xi^\mu_{\text{in}}) - \theta\right) \geq 1 \ \ \forall \mu$ (3)

which can be directly identified as the support vector machine (SVM) problem for separable data [13].
The solution to Eq. 3 can today be found in any textbook on basic machine learning methods, and yields an optimal output rule that can be written in a feature and a kernel form:

$s_{\text{out}} = \mathrm{sgn}\left[\sum_{\mu}^{M} \alpha^\mu \xi^\mu_{\text{out}}\, \phi(\xi^\mu_{\text{in}})^\top \phi(s_{\text{in}}) - \theta\right] = \mathrm{sgn}\left[\sum_{\mu}^{M} \alpha^\mu \xi^\mu_{\text{out}}\, K(\xi^\mu_{\text{in}}, s_{\text{in}}) - \theta\right]$ (4)

where we, in the latter expression, have used the "kernel trick" $K(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$. The solution depends on the Lagrange coefficients $\alpha^\mu \geq 0$, many of which are typically zero. Patterns with $\alpha^\mu > 0$ are called support vectors.

3 Kernel memory networks for binary patterns

3.1 Hetero-associative memory as a feed-forward SVM network

We begin by considering a hetero-associative memory network with an arbitrary number $N_{\text{out}}$ of output neurons, whose combined state we denote $s_{\text{out}}$. In order for the network as a whole to be able to tolerate a maximal level of noise and still successfully recall its stored memories, we solve Eq. 3 for each neuron independently. As each neuron can have a different classification boundary along with a different set of support vectors, its weights will, in general, be characterized by an independent set of $M$ Lagrange coefficients. To simplify the notation, we represent these coefficients $\alpha^\mu_i$, across neurons $i$ and patterns $\mu$, as entries in the matrix $A$, where $(A)_{i\mu} = \alpha^\mu_i$. We also combine all thresholds in the vector $\theta = (\theta_1, \ldots, \theta_{N_{\text{out}}})$, and all input and output patterns as columns in the matrices $X_{\text{in}} = (\xi^1_{\text{in}}, \ldots, \xi^M_{\text{in}})$ and $X_{\text{out}} = (\xi^1_{\text{out}}, \ldots, \xi^M_{\text{out}})$. Finally, we assume that all neurons have the same feature map, so that $\phi_i = \phi$ for all $i$ (see Fig. 1). All functions are applied column-wise when the argument is a matrix, for example $\phi(X_{\text{in}}) = (\phi(\xi^1_{\text{in}}), \ldots, \phi(\xi^M_{\text{in}}))$. The optimal response of the network can now be compactly summarized as follows.

Property 1 (Robust hetero-associative memory network). A single-layer hetero-associative memory network trained to recall the patterns $X_{\text{out}}$ in response to the inputs $X_{\text{in}}$ with maximal noise robustness has an optimal output rule that can be written as

$s_{\text{out}} = \mathrm{sgn}\left[(A \circ X_{\text{out}})\, \phi(X_{\text{in}})^\top \phi(s_{\text{in}}) - \theta\right]$ (feature form) (5)

$s_{\text{out}} = \mathrm{sgn}\left[(A \circ X_{\text{out}})\, K(X_{\text{in}}, s_{\text{in}}) - \theta\right]$ (kernel form) (6)

where $\circ$ denotes the Hadamard product.
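To illustrate Property 1 concretely, here is a minimal sketch (our own illustration, using scikit-learn rather than any code from the paper; names and sizes are ours) that trains each output neuron as an independent kernel SVM and then recalls patterns from noisy queries:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_in, N_out, M = 100, 50, 30
X_in = rng.choice([-1, 1], size=(M, N_in))     # input patterns, one per row
X_out = rng.choice([-1, 1], size=(M, N_out))   # target patterns, one per row

# Train one max-margin classifier per output neuron (Eq. 3), here with a
# cubic polynomial kernel; a large C approximates the hard-margin problem.
neurons = [SVC(kernel="poly", degree=3, C=1e6).fit(X_in, X_out[:, i])
           for i in range(N_out)]

def recall(s_in):
    """Kernel-form read-out of Property 1 (Eq. 6): one prediction per neuron."""
    s = s_in.reshape(1, -1)
    return np.array([n.predict(s)[0] for n in neurons])

query = X_in[7] * np.where(rng.random(N_in) < 0.1, -1, 1)  # flip ~10% of bits
print(np.mean(recall(query) == X_out[7]))      # fraction of bits recalled
```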
3.2 Auto-associative memory as a recurrent SVM network

The hetero-associative network can be made auto-associative by setting $N_{\text{out}} = N_{\text{in}}$ and $X_{\text{out}} = X_{\text{in}}$. The network is now effectively recurrent, as each neuron can serve both as an input and output simultaneously (see Fig. 1). Consider a recurrent network with $N$ neurons, whose state at time point $t$ is denoted $s^{(t)} \in \{\pm 1\}^N$, and whose dynamics evolve according to the update rule

$s^{(t+1)}_i = \mathrm{sgn}\left[w_i^\top \phi(s^{(t)}) - \theta_i\right]$ (7)

where $w_i \in \mathbb{R}^N$ is the weight vector to neuron $i = 1, \ldots, N$. In order to make the patterns $\{\xi^\mu\}_{\mu=1}^M$ fixed points of the network dynamics, we train each neuron $i$ independently on every pattern $\mu$ to, again, produce the response $\xi^\mu_i$ when the rest of the network is initialized in $\xi^\mu$. Moreover, we maximize the amount of noise that can be tolerated by the network while maintaining error-free recall by maximizing the smallest Euclidean distance between each neuron's decision boundary and its inputs. This maximizes the size of the attractor basins [18, 32]. The problem of training the entire network is, in this way, transformed into the problem of training $N$ separate classifiers according to

$\min_{w_i} \|w_i\|_2 \quad \text{s.t.} \quad \xi^\mu_i \left(w_i^\top \phi(\xi^\mu) - \theta_i\right) \geq 1, \ \forall \mu, i.$ (8)

The solution can be obtained by slightly modifying Property 1, and is stated below.

Property 2.1 (Robust auto-associative memory). A recurrent auto-associative memory network trained to recall the patterns $X$ with maximal noise robustness has an optimal synchronous update rule that can be written as

$s^{(t+1)} = \mathrm{sgn}\left[(A \circ X)\, \phi(X)^\top \phi(s^{(t)}) - \theta\right]$ (feature form) (9)

$s^{(t+1)} = \mathrm{sgn}\left[(A \circ X)\, K(X, s^{(t)}) - \theta\right]$ (kernel form) (10)

Remark. With a linear feature map $\phi(x) = x$, the optimal update is reduced to

$s^{(t+1)} = \mathrm{sgn}\left[(A \circ X)\, X^\top s^{(t)} - \theta\right]$ (11)

where $(A \circ X) X^\top$ can be identified as the general form of the optimal weight matrix. The solution described by Property 2.1 does not, in general, prohibit a neuron from having self-connections. Applying this constraint yields the following result.

Property 2.2 (Robust auto-associative memory without self-connections). A recurrent auto-associative memory network without self-connections, with the inner-product kernel $K(x_i, x_j) = k(x_i^\top x_j)$, that has been trained to recall the patterns $X$ with maximal noise robustness, has an optimal asynchronous update rule that can be written in the kernel form

$s^{(t+1)}_i = \mathrm{sgn}\left[\sum_{\mu}^{M} \alpha^\mu_i \xi^\mu_i\, k\!\left(\sum_{j \neq i}^{N} \xi^\mu_j s^{(t)}_j\right) - \theta_i\right].$ (12)

Storage capacity. An intuition for the storage capacity scaling of the hetero- and auto-associative memory networks can be gained by observing that the network as a whole will be able to successfully recall patterns as long as each neuron is able to correctly classify its inputs (or is very unlikely to produce an error). The capacity of the network can thereby be derived from the capacity of each individual neuron. It is well known that a linear binary classifier can learn to correctly discriminate a maximum of $M_{\max} \approx 2 D_{\text{VC}}$ random patterns, where $D_{\text{VC}}$ is the Vapnik-Chervonenkis dimension of the classifier [15, 20, 40, ch. 40]. For a neuron with $N$ inputs and a linear feature map $\phi(x) = x$, this results in $D_{\text{VC}} = N$ and, thus, the capacity $M_{\max} \approx 2N$. Suppose, on the other hand, that the kernel is a homogeneous polynomial of degree $p$, so that $K(x_i, x_j) = (x_i^\top x_j)^p$. In this case, $\phi$ will contain all monomials of degree $p$ composed of the entries in $x$. As there are $O(N^p)$ unique $p$-degree monomials (see Appendix A.1), the input dimensionality and $M_{\max}$ will be $O(N^p)$. For the exponential kernel, which we can write as $K(x_i, x_j) = \exp(x_i^\top x_j) = \sum_{p=0}^{\infty} (x_i^\top x_j)^p / p!$, the dimensionality of $\phi$ will be $\sum_{p=0}^{N} \binom{N}{p} = 2^N$, which yields $M_{\max} \sim O(e^N)$.

Special cases. In the following sections, we will show that many of the models of hetero- and auto-associative memory that have been proposed over the past years are special cases of the solutions in Properties 1, 2.1, and 2.2, characterized by specific choices of $A$, $\phi$, and $K$.
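Before specializing to those models, the kernel-form update of Eq. 10 is easy to exercise numerically. Below is a minimal NumPy sketch using the suboptimal heuristic $A = 1$, $\theta = 0$ (all names and sizes are our own illustration), which anticipates the Hopfield-style special cases discussed next:

```python
import numpy as np

def recurrent_update(X, s, k=lambda h: h, theta=0.0):
    """One synchronous step of Eq. 10 with A = 1: s <- sgn[X k(X^T s) - theta].
    X is (N, M), one stored pattern per column; k is the kernel profile."""
    return np.sign(X @ k(X.T @ s) - theta)

rng = np.random.default_rng(0)
N, M = 200, 20
X = rng.choice([-1.0, 1.0], size=(N, M))

s = X[:, 0].copy(); s[: N // 5] *= -1            # corrupt 20% of the bits
for _ in range(5):                               # linear k: classical Hopfield
    s = recurrent_update(X, s)
print("linear kernel overlap:", (s @ X[:, 0]) / N)

s = X[:, 0].copy(); s[: N // 5] *= -1
for _ in range(5):                               # cubic k: a modern Hopfield net
    s = recurrent_update(X, s, k=lambda h: (h / N) ** 3)
print("cubic kernel overlap:", (s @ X[:, 0]) / N)
```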
3.3 Kanerva's sparse distributed memory is a feed-forward SVM network

The sparse distributed memory (SDM), developed by Kanerva [30], is one of the most famous examples of a hetero-associative memory model. It has lately received much attention in the context of generative memory models [69] and attention layers in transformers [8]. The SDM consists of a register of $N$ memory slots, each associated with an address $z_i \in \{\pm 1\}^{N_{\text{in}}}$, $i = 1, \ldots, N$. All addresses are listed as rows in the matrix $Z = (z_1, \ldots, z_N)^\top$. The content of each slot is represented by an $N_{\text{out}}$-dimensional vector, initialized at zero. Suppose that we wish to store the $M$ patterns $X_{\text{out}} = (\xi^1_{\text{out}}, \ldots, \xi^M_{\text{out}})$ in the addresses $X_{\text{in}} = (\xi^1_{\text{in}}, \ldots, \xi^M_{\text{in}})$, where all entries are random and bipolar. The basic idea of the SDM is to write the data to, and later read it from, multiple memory slots at once (hence the distributed storage); this ensures a degree of noise-robustness. In mathematical terms, the read-out of the SDM provided with a query $s_{\text{in}}$ is given by

$s_{\text{out}} = \mathrm{sgn}\left[X_{\text{out}}\, \Theta(Z X_{\text{in}} - b)^\top\, \Theta(Z s_{\text{in}} - b)\right]$ (13)

where $\Theta$ is the Heaviside function, with bias $b = N_{\text{in}} - 2r$, and $r$ is a parameter that determines the precision of the writing and reading process. Upon comparing Eqs. 13 and 5, the SDM can be directly identified as a special case of a suboptimal feed-forward SVM network in the feature form, with $A = 1$, $\theta = 0$, and the feature map $\phi_{\text{SDM}}(x) = \Theta(Zx - b)$. When viewed as a kernel method, the function of the SDM is to store the dense addresses $X_{\text{in}}$ as sparse high-dimensional representations $\phi_{\text{SDM}}$, to make it easier to later determine the slots closest to a query $s_{\text{in}}$ and retrieve the relevant data.

Capacity. As the SDM is linear in $\phi_{\text{SDM}}$, with $D_{\text{VC}} \approx N$, it follows from the analysis in Sec. 3.2 that one should expect the capacity to scale as $M_{\max} \sim O(N)$. Moreover, one should expect a proportionality constant ~0.1, since the SDM is suboptimal relative to the feed-forward SVM network, analogously to how the classical Hopfield network is suboptimal relative to the recurrent SVM network (see Sec. 3.4). This is consistent with earlier proofs [12, 31].

Kernel of an infinite SDM. In practice, an SDM with a large number of memory slots $N$ requires calculations involving a large address matrix $Z$. This can be avoided by applying the kernel trick to Eq. 13 in the limit $N \to \infty$, which allows for the output to be computed with

$s_{\text{out}} = \mathrm{sgn}\left[X_{\text{out}}\, K_{\text{SDM}}(X_{\text{in}}, s_{\text{in}})\right]$ (14)

where we have defined the kernel as

$K_{\text{SDM}}(x_i, x_j) = \lim_{N \to \infty} \frac{\phi_{\text{SDM}}(x_i)^\top \phi_{\text{SDM}}(x_j)}{N}$ (15)

in order to ensure convergence. In this section, we will derive this kernel for two different variants of the SDM and demonstrate that both are translation-invariant. It is interesting to note here that $\phi_{\text{SDM}}$ is equivalent to a single-layer neural network with $N$ neurons, weights $Z$, and bias $b$. This means that $K_{\text{SDM}}$ is equivalent to the kernel of an infinitely wide neural network [11, 47, 67]. We begin by noticing that $\phi_{\text{SDM}}(x)$ has a geometrical interpretation [8, 31]. It is a binary vector that indicates those memory addresses in $Z$ that differ by at most $r$ bits compared to $x$. For any two bipolar vectors $z$ and $x$, the bit-wise difference can be computed as $\frac{1}{2}\|z - x\|_1 = \frac{1}{4}\|z - x\|_2^2$. This means that $\phi_{\text{SDM}}(x)$ indicates all addresses that lie within a sphere centered at $x$ with radius $2\sqrt{r}$. Consequently, the inner product $\phi_{\text{SDM}}(x_i)^\top \phi_{\text{SDM}}(x_j)$ is the number of addresses located in the overlapping volume of two spheres centered at $x_i$ and $x_j$. Although an exact calculation of this quantity can be found in [8, 30], its connection to the SDM kernel has, to the best of our knowledge, not previously been made. We therefore modify the previously published expression with a normalization factor $1/2^{N_{\text{in}}}$ and state the following property.

Property 3.1 (Kernel of an infinite SDM on the hypercube). In the limit $N \to \infty$, the kernel of an SDM with $N$ memory slots, whose addresses are randomly drawn from $\{\pm 1\}^{N_{\text{in}}}$, is given by

$K_{\text{SDM}}(x_i, x_j) = \frac{1}{2^{N_{\text{in}}}} \sum_{i = N_{\text{in}} - r - \lfloor \Delta/2 \rfloor}^{N_{\text{in}} - \Delta}\; \sum_{j = [N_{\text{in}} - r - i]_+}^{\Delta - (N_{\text{in}} - r - i)} \binom{N_{\text{in}} - \Delta}{i} \binom{\Delta}{j}$ (16)

where $r$ is the bit-wise error threshold and $\Delta$ is the bit-wise difference between $x_i$ and $x_j$, given by $\Delta = \frac{1}{2}\|x_i - x_j\|_1 = \frac{1}{4}\|x_i - x_j\|_2^2$.
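A minimal NumPy sketch of the SDM write/read cycle of Eq. 13 (illustrative names and sizes, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N_in, N_out, N_slots, M, r = 100, 60, 5000, 40, 40

Z = rng.choice([-1, 1], size=(N_slots, N_in))    # random slot addresses
X_in = rng.choice([-1, 1], size=(N_in, M))       # write addresses, columns
X_out = rng.choice([-1, 1], size=(N_out, M))     # data to store, columns
b = N_in - 2 * r                                 # activation bias

def active(x):
    """phi_SDM(x) = Theta(Zx - b): slots within r bits of the address x."""
    return (Z @ x >= b).astype(float)

# Write phase: superimpose every datum onto all of its active slots, giving
# the term X_out Theta(Z X_in - b)^T of Eq. 13.
slot_contents = X_out @ np.stack([active(X_in[:, m]) for m in range(M)])
# Read phase: pool the slots activated by a noisy query and threshold.
query = X_in[:, 5] * np.where(rng.random(N_in) < 0.05, -1, 1)
s_out = np.sign(slot_contents @ active(query))
print(np.mean(s_out == X_out[:, 5]))             # fraction of bits recovered
```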
The SDM can also be implemented with continuous addresses, randomly placed on a unit hypersphere of $(N_{\text{in}} - 1)$ dimensions, denoted $S^{N_{\text{in}}-1}$. The vector $\phi_{\text{SDM}}(x)$ now indicates all addresses that lie within a hyperspherical cap centered at $x$, with an angle $\arccos(b)$ between its central axis and the rim. The inner product $\phi_{\text{SDM}}(x_i)^\top \phi_{\text{SDM}}(x_j)$ is the number of addresses located in the overlapping area of two spherical caps centered at $x_i$ and $x_j$. While a calculation of this quantity can, again, be found in [8], it has not previously been connected to the kernel of an SDM. We simplify the previously published result and also derive a closed-form approximation, valid for a highly sparse SDM (see Appendix B for details). The results are summarized below.

Property 3.2 (Kernel of an infinite SDM on the hypersphere). In the limit $N \to \infty$, the kernel of an SDM with $N$ memory slots, whose addresses are randomly drawn from $S^{N_{\text{in}}-1}$, is given by

$K_{\text{SDM}}(x_i, x_j) = \frac{N_{\text{in}} - 2}{2\pi} \int_{\alpha_x}^{\alpha_b} \sin(\varphi)^{N_{\text{in}}-2}\, B\!\left[1 - \frac{\tan^2(\alpha_x)}{\tan^2(\varphi)};\ \frac{N_{\text{in}}-2}{2},\ \frac{1}{2}\right] d\varphi$ (17)

where $\alpha_x = \frac{1}{2}\arccos(x_i^\top x_j)$, $\alpha_b = \arccos(b)$, and $B$ is the incomplete Beta function. In the highly sparse regime, when $0.9 \lesssim b < 1$ and $\frac{1}{N}\|\phi_{\text{SDM}}\|_0 \ll 1$, the kernel can be approximated with

$K_{\text{SDM}}(x_i, x_j) \approx \frac{\hat{b}^{N_{\text{in}}-1}}{2\pi}\, B\!\left[1 - \left(\frac{\delta}{\hat{b}}\right)^2;\ \frac{N_{\text{in}}}{2},\ \frac{1}{2}\right]$ (18)

where $\delta = \frac{1}{2}\|x_i - x_j\|_2$ and $\hat{b} = \sin(\arccos(b))$.

In conclusion, an infinitely large SDM with sparse internal representations $\phi_{\text{SDM}}$ can be represented as a suboptimal case of a feed-forward SVM network with a translation-invariant kernel.

3.4 The modern Hopfield network is a recurrent SVM network

The Hopfield network [26] is, arguably, the most well-known model of auto-associative memory. In its modern form [35], it is a recurrent network of $N$ neurons with the state $s^{(t)}$, whose dynamics are governed by the energy and state update rule

$E = -\sum_{\mu}^{M} F\!\left(\sum_{i}^{N} \xi^\mu_i s^{(t)}_i\right), \qquad s^{(t+1)}_i = \mathrm{sgn}\!\left[\sum_{\mu}^{M} \xi^\mu_i\, F'\!\left(\sum_{j \neq i}^{N} \xi^\mu_j s^{(t)}_j\right)\right]$ (19)

where $F$ is a smooth function, typically a sigmoid, polynomial, or exponential. This "generalized" Hopfield model has a long history [see, e.g., 1, 21, 25, 37] but has received renewed attention in recent years under the name modern Hopfield network (MHN) or dense associative memory [17, 35]. By comparing Eq. 19 with Eq. 12, the state update of the MHN can be identified as a special case of a suboptimal recurrent SVM network in the kernel form, with $k = F'$, $A = 1$, and $\theta = 0$ (since $f = 0.5$). With a linear $F'(x) = x$, the MHN reduces to the classical Hopfield network, which is a special case of the recurrent SVM network with the linear kernel $k(x_i^\top x_j) = x_i^\top x_j$.
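For the exponential $F'$, one step of Eq. 19 is often written with a softmax normalization, which leaves the sign of the sum unchanged and makes the attention-like flavor explicit. A small sketch (our own illustration; the $\beta/\sqrt{N}$ scaling and all sizes are our choices) storing many more patterns than neurons:

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

rng = np.random.default_rng(2)
N, M, beta = 128, 1000, 8.0                  # many more patterns than neurons
X = rng.choice([-1.0, 1.0], size=(N, M))
s = X[:, 0].copy(); s[:12] *= -1             # corrupt a few bits

# One update with exponential F' (softmax-normalized, which does not change
# the sign of the pooled sum): the attention-like retrieval step.
s = np.sign(X @ softmax(beta / np.sqrt(N) * (X.T @ s)))
print((s @ X[:, 0]) / N)                     # overlap ~1.0 despite M >> N
```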
Capacity. The storage capacity of the MHN has been shown to depend on the shape of $F'$. In the linear case, the capacity is famously limited to ~0.1N patterns, depending on the precision of retrieval [2, 42]. If, on the other hand, $F'$ is polynomial with degree $p$, the capacity scales as $M_{\max} \sim O(N^p)$ [35], while an exponential $F'$ endows the network with a capacity $M_{\max} \sim O(e^N)$ [17]. From the perspective of the kernel memory framework, this scaling directly follows from the analysis in Sec. 3.2 with $k = F'$. In fact, in the regime of low errors, the kernel memory framework can also be used to derive a more precise capacity scaling for the classical Hopfield network. We first note that any one-shot learning rule that implies $A > 0$ is equivalent to an SVM network where every stored pattern is a support vector. Such a heuristic is only likely to be close to the optimal solution and perform well in large networks with very few patterns, as high-dimensional linear SVMs trained on few patterns are highly likely to find solutions where all patterns are support vectors; this effect has been termed support vector proliferation [4]. Restricting the network to this regime limits the capacity to $M_{\max} \sim O\!\left(\frac{N}{2 \log N}\right)$, consistent with the result in [42] (see Appendix A.2).

Iterative learning rules. The problem of iteratively training MHNs with biologically plausible online learning rules has recently been studied [64], with a resulting storage capacity ranging from ~0.16N to ~N, depending on the exact implementation. The aim, in general, of such studies is to find a learning rule capable of producing a capacity close to the theoretical maximum ~2N. For this purpose, the perspective of kernel memory networks can be particularly helpful, as many of the algorithms that have been developed over the past two decades to optimize SVMs can be utilized for MHNs as well. For example, a network formulated in the feature form can be trained with the stochastic batch perceptron rule [14, 34], the passive-aggressive rules [16], the minnorm rule [5], as well as with likelihood maximization applied to logistic regression [29, 46, 59]. In the kernel form, two of the most well-known online algorithms for training linear and non-linear SVMs are the Adatron [3] and the Kernel-Adatron [19]. A performance comparison between iterative learning and the modern Hopfield learning rule can be found in Appendix C.

Generalization. Viewing the MHN as a recurrent network of SVMs can also facilitate a more intuitive understanding of its ability to generalize when used as a conventional classifier. In this setting, one designates a subset of the neurons as input units, and the remaining neurons as outputs. Given a set of input-output associations, one optimizes the memory patterns $\xi^\mu$ using, for example, gradient descent. Such an experiment was performed by Krotov and Hopfield [35] on the MNIST data set, using a polynomial non-linearity $F(x) = x^p$. Results showed that the test error first improved as $p$ increased from 2 to 3, but later deteriorated for high degrees, like $p = 20$. While it may be difficult to explain this behavior within an energy-based framework, it is entirely expected when viewed from the SVM perspective: a kernel of low polynomial degree has too few degrees of freedom to fit the classification boundary in the training set, causing underfitting, while a polynomial of too high degree grants the model too much flexibility, which results in overfitting.

The pseudoinverse learning rule. The coefficients in $A$ are, in general, computed numerically and cannot be written in closed form. However, in the special case when Eq. 8 is underdetermined, meaning $M < N$, a closed-form (but suboptimal) solution can be obtained using the least-squares SVM method [60]. The result is a generalized form of the pseudoinverse learning rule [50]. See Appendix D for details.
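As a concrete rendering of this special case in the linear setting (a standard construction, sketched with our own variable names), the pseudoinverse rule makes all $M < N$ patterns exact fixed points via a projection matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 100, 30                       # underdetermined regime, M < N
X = rng.choice([-1.0, 1.0], size=(N, M))

# Pseudoinverse rule: W = X X^+ = X (X^T X)^(-1) X^T projects onto span(X),
# so sgn(W xi) = xi and every stored pattern is an exact fixed point.
W = X @ np.linalg.pinv(X)

s = X[:, 0].copy(); s[:10] *= -1     # corrupt 10 of 100 bits
for _ in range(5):
    s = np.sign(W @ s)
print((s @ X[:, 0]) / N)             # overlap ~1.0
```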
4 Kernel memory networks for continuous patterns

4.1 Auto-associative memory as a recurrent interpolation network

So far, we have considered memory models designed to store only bipolar patterns. We now relax this constraint and allow patterns to be continuous-valued. We first observe that any set of patterns $X \in \mathbb{R}^{N \times M}$ can be made fixed points of the dynamics by training each neuron $i$ to interpolate $\xi^\mu_i$ when the rest of the network is initialized in $\xi^\mu$, for every pattern $\mu$. Assuming that the model is equipped with a kernel that allows for each fixed point to also be attracting, we can ensure that a lower-bounding estimate of the size of the attractor basin is maximized by finding the interpolation with minimum weight norm (see Appendix E.1 for proof). These results are summarized below.

Property 4 (Robust auto-associative memory with continuous patterns). Suppose that the dynamics of a recurrent auto-associative memory network evolve according to the synchronous update rule

$s^{(t+1)} = X K^\dagger K(X, s^{(t)})$ (20)

where $K = K(X, X) = \phi(X)^\top \phi(X)$ is the kernel matrix and $K^\dagger$ its Moore-Penrose pseudoinverse, with $K^\dagger = K^{-1}$ if $\phi(X)$ has full column rank. Then, the dynamics of the network are guaranteed to have the fixed points $X$. Moreover, if the points are attracting, Eq. 20 maximizes a lower bound on the attractor basin sizes.

4.2 A recurrent interpolation network with exponential capacity

Memory models for continuous data [e.g., 27, 33, 48] have generally received less attention than their binary counterparts. Recently, however, Ramsauer et al. [57] proposed an energy-based model capable of storing an exponential number of continuous-valued patterns (we will refer to this model as the softmax network). While the structure of this model is similar to Eq. 20, it cannot be analyzed within the framework of Property 4, as it involves a kernel that is neither symmetric nor positive-definite [68]. Nonetheless, we will in this section demonstrate that it is possible to use conventional kernel methods to design an attractor network with exponential capacity for continuous patterns. We utilize the properties of the SDM by using a translation-invariant kernel with a fixed spatial scale $r$. For the sake of simplicity, we choose the exponential power kernel (Exp$\beta$)

$K_{\exp\beta}(x_i, x_j) = \exp\left[-\left(\tfrac{1}{r}\|x_i - x_j\|_2\right)^\beta\right]$ (21)

where $\beta, r > 0$. These parameters determine the shape of the attractor basin that surrounds each pattern. While $r$ roughly sets the radius of attraction, $\beta$ represents an inverse temperature which changes the steepness of the boundary of the attractor basin. Moreover, as long as the patterns are unique, the kernel matrix is invertible and we have $K_{\exp\beta}^\dagger = K_{\exp\beta}^{-1}$ [44]. We will now analyze the noise robustness and storage capacity of this model. To make the analysis tractable, we will operate in the regime of low temperatures, meaning the limit $\beta \to \infty$. We first establish the following three properties.

Property 5.1 (The Exp$\beta$ network at zero temperature). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^M$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > r$, the state update rule for the Exp$\beta$ network at $\beta \to \infty$ reduces to

$s^{(t+1)} = X\,\Theta(r - \|X - s^{(t)}\|_2)$ (22)

where $\Theta(\cdot)$ is the Heaviside function with $\Theta(0) = e^{-1}$ (see Appendix E.2.1).

Remark. In geometrical terms, Property 5.1 states that the boundary of the basin of attraction surrounding each pattern becomes a sharp $(N-1)$-dimensional hypersphere with radius $r$ in the limit $\beta \to \infty$. For lower, finite $\beta$, the spherical boundary becomes increasingly fuzzy. From the perspective of an energy landscape, each pattern lies in an $N$-dimensional energy minimum with infinitely steep walls when $\beta \to \infty$. As $\beta$ is lowered, the barriers become progressively smoother.

Property 5.2 (Convergence in one step). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^M$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$, the Exp$\beta$ network at $\beta \to \infty$, initialized at $s^{(0)} = \xi^\mu + \delta\xi$, will converge to $\xi^\mu$ in one step if $\|\delta\xi\|_2 < r$.

Property 5.3 (No spurious attractors). Given a set of unique patterns $\{\xi^\mu\}_{\mu=1}^M$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$ and $\nexists\, \mu : \|\xi^\mu\|_2 = r/(1 - e^{-1})$, the only attractors of the dynamics of the Exp$\beta$ network at $\beta \to \infty$ are the points $\{\xi^\mu\}_{\mu=1}^M$, together with $0$ if $\nexists\, \mu : \|\xi^\mu\|_2 \leq r$.

Remark. Properties 5.2 and 5.3 can be shown to be true simply by inserting the expression $s^{(0)} = \xi^\mu + \delta\xi$ in Eq. 22.
Assuming no overlaps between the basins of attraction, a quick calculation shows that $s^{(1)} = \xi^\mu$ if $\|\delta\xi\|_2 < r$. If, on the other hand, the network is initialized such that $\|s^{(0)} - \xi^\mu\|_2 > r$ for all $\mu$, one always obtains $s^{(2)} = \xi'$, where $\xi'$ is either $0$ or the pattern closest to $0$. In other words, the network recalls a pattern only if the initialization is close enough to it. If located far from all patterns, the network assumes an "agnostic" state, represented either by the origin or by the pattern closest to the origin (if the origin happens to be located within a basin of attraction). In the following two properties, we evaluate how the radius of attraction $r$ determines the maximum input noise tolerance and storage capacity.

Property 6 (Robustness to white noise). Assume that we are given a set of unique patterns $\xi^1, \ldots, \xi^M \sim \mathcal{N}(0, I_N)$ with $\min_{\mu, \nu \neq \mu} \|\xi^\mu - \xi^\nu\|_2 > 2r$, and that the Exp$\beta$ network is initialized in a distorted pattern $s^{(0)} = \xi^\mu + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2 I_N)$. Then, at $\beta \to \infty$, the maximum noise variance $\sigma^2_{\max}$ with which $\xi^\mu$ can be recovered in at least 50% of trials is

$\sigma^2_{\max} = r^2 / N$. (23)

Property 7 (Exponential storage capacity). At $\beta \to \infty$, and for $N \gg 1$, the average maximum number of patterns sampled from $\mathcal{N}(0, I_N)$ that the Exp$\beta$ network can store and recall without errors is lower-bounded according to

$M_{\max} \geq \sqrt{\frac{2}{\sqrt{\pi N}\,(1 - 2\sigma^2_{\max})}}\; \exp\!\left[\frac{N (1 - 2\sigma^2_{\max})^2}{8}\right]$ (24)

where $\sigma^2_{\max}$ is the maximum white noise variance tolerated by the network.

Remark. Proofs can be found in Appendices E.2.2 and E.2.3. Note that Property 7 is valid in the range $\sigma^2_{\max} \lesssim 1/2$. While the bounds are fairly tight at the upper end of the range, they become loose when $\sigma^2_{\max} \to 0$. In this limit, which is equivalent to $r \to 0$, the storage capacity tends to infinity, as the risk of interference between patterns vanishes when their radius of attraction becomes infinitesimal.

Comparison to the softmax network. If patterns are randomly placed on a hypersphere instead of being normally distributed, the state update rule in Eq. 22 reduces to the form $s^{(t+1)} = X\,\Theta(X^\top s^{(t)} - \theta)$, where $\theta$ is a fixed threshold. While the capacity remains exponential (see Appendix E.3.1), the basin of attraction surrounding each pattern now forms a spherical cap instead of a ball. We can compare this to the softmax network at zero temperature, given by $s^{(t+1)} = \lim_{\beta \to \infty} X\,\mathrm{softmax}(\beta X^\top s^{(t)}) = X\,\mathrm{argmax}(X^\top s^{(t)})$. This model differs from the Exp$\beta$ only in a replacement of $\Theta$ with $\mathrm{argmax}$. This changes the shape of the attractor basins from spherical caps to Voronoi cells, which parcellate the entire surface of the hypersphere into a Voronoi diagram (see Fig. 2). The boundary of each basin is now no longer radially symmetric around a pattern, but instead extends as far as possible in all directions. Consequently, at $\beta \to \infty$, the softmax network has larger attractor basins and always converges to one of the stored patterns, regardless of the initialization point (assuming this is not precisely on a boundary). In contrast, the Exp$\beta$ network may converge to the origin if initialized far from all patterns. This can be interpreted as an agnostic response, which indicates that the model cannot associate the input query with any of its stored patterns.

5 Discussion

Biological interpretation. Kernel memory networks can be mapped to the anatomical properties of biological neurons. Consider an individual neuron in the feature form of the recurrent network (Eq. 9). The state of neighboring neurons $s$ is first transformed through $\phi(s)$ and thereafter projected to the neuron through the weight matrix $(A \circ X)\,\phi(X)^\top$.
When the kernel is polynomial of degree $p$, so that $K(x_i, x_j) = (x_i^\top x_j + 1)^p$, the transformation $\phi(s)$ consists of all elements in $s$ and their cross-terms, up to degree $p$. The input to each neuron, in other words, consists of the states of all other neurons, as well as all possible combinations of their multiplicative interactions. This neuron model can be viewed as a generalized form of, for example, the multiconnected neuron [49], the clusteron [43], or the sigma-pi unit [58, p. 73]. These are all perceptrons that include multiplicative input interactions as a means to model synaptic cross-talk and cluster-sensitivity on non-linear dendrites [55] (see Fig. 1). In the kernel form (Eq. 10), each neuron again implicitly comprises a two-stage process, whereby the raw input $s$ is first transformed through the function $K(X, s)$ and then projected through the weight matrix $A \circ X$. For any inner-product kernel $K = k(x_i^\top x_j)$, this representation can be directly identified as a two-layer neural network, where the hidden layer is defined by the weights $X$ and the activation function $k$. This interpretation of the recurrent network was recently proposed in [35, 36] and discussed in relation to hippocampal-cortical interactions involved in memory storage and recall; it is particularly reminiscent of the hippocampal indexing theory [6, 61]. However, the kernel form can also be viewed as a network in which each individual neuron is a generalized form of the two-layered pyramidal cell model [53, 54]. This was originally proposed as an abstract neuron model augmented with non-linear dendritic processing [41]. It should be noted, however, that the idea of interpreting kernel methods as neural networks has a longer history, and has been extensively analyzed in the case of, for example, radial basis functions [51, 52]. For further details, see Appendix F.

Summary. We have shown that conventional kernel methods can be used to derive the weights for hetero- and auto-associative memory networks storing binary or continuous-valued patterns with maximal noise tolerance. The result is a family of optimal memory models, which we call kernel memory networks, that includes the SDM and MHN as special cases. This unifying framework facilitates an intuitive understanding of the storage capacity of memory models and offers new ways to biologically interpret these in terms of non-linear dendritic integration. This work formalizes the links between kernel methods, attractor networks, and models of dendritic processing.

Future work. A unifying theoretical framework for memory modeling can be useful for the development both of improved bio-plausible memory models and of machine learning applications. First, recognizing that there exist algorithms for training optimally noise-robust classifiers, and adapting these to biological constraints, can aid the development of normative synaptic three-factor learning rules [23]. Second, the theoretical link between neuron models, kernel functions, and storage capacity enables one to fit kernel memory networks to neurophysiological data and to analyze the computational properties of biophysically informed memory models. Finally, our unifying framework reveals that most memory models differ only in the choice of kernel (model complexity) and Lagrange parameters (model precision).
This categorization simplifies the tailoring of memory models to their application, and allows for the design of models whose properties can depart fundamentally from those of kernel memory networks, for example, by choosing kernels not associated with a reproducing kernel Hilbert space.

Acknowledgments and Disclosure of Funding

This study was supported by funding from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology to the Blue Brain Project, a research center of the École Polytechnique Fédérale de Lausanne (EPFL).
1. What is the focus and contribution of the paper on neural network design? 2. What are the strengths of the proposed approach, particularly in terms of its mathematical structures and derived attractor network model? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with prior works? 4. Do you have any concerns or questions regarding the proposed networks' ability to store patterns with maximal noise robustness? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work pursues a classical yet interesting problem, that of designing a neural network to store a set of patterns with maximal noise robustness, and presents alternative understandings of this problem by investigating several kernel attractor networks. The claimed contributions can be summarized as follows: proposed two mathematical structures of feedforward and recurrent memory networks for binary patterns. show that the normative models include, as special cases, the classical and modern Hopfield network, as well as the SDM. derived a simple attractor network model for storing an exponential number of continuous-valued patterns with a finite basin of attraction. Strengths And Weaknesses Strengths: This paper is organized clearly, and the authors provide many plots illustrating the structure of the proposed networks. The derivation procedure for the feedforward and recurrent memory networks is easy to follow. The result about storing an exponential number of continuous-valued patterns with a finite basin of attraction seems to be a laudable performance. Weaknesses: Given the limits of my knowledge, this paper is hard to follow and comment on. Here, I list my main concerns, which may hinder acceptance. Is there something over-claimed? For example, the discussions about its similarity to attention and new biological interpretations seem only to offer some intuitions from the perspectives of attention and other fields, with little analysis. It would be better to provide detailed and solid support for this claim. There is a lack of clear-cut comparative results between the proposed networks and previous ones. For example, what are the advantages of the framework that the authors propose here? Questions What is the definition or formulation of maximal noise robustness? It is difficult to extract any noise or robustness information from Properties 1, 2.1, and 2.2, unless I missed something. Limitations Nothing mentioned.
NIPS
Title CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers Abstract Development of transformer-based text-to-image models is impeded by slow generation and high complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows competitive generation performance to the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images.

[Figure 1: Text-to-Image samples from CogView2, which supports both Chinese and English. Panel prompts: A lion man is typing in the office. A beautiful girl is hugging a husky. A lion teacher wearing a suit is in front of a blackboard. A robot is riding under the blue and cloudy sky. Several youths are talking in a bar. A young woman is taking photos. A tiger with angel's wings. A girl holding an oil-paper umbrella in a rainy lane. Earth in the Eye. A magnificent church. Sketch. Mount Fuji, cherry blossom and Akita dog. Oil painting. A pirate captain with a skull. The actual input text is in Chinese, translated into English here for better understanding. Codes and a demo website will be updated at https://github.com/THUDM/CogView2.]

1 Introduction

Recently, text-to-image generation has been greatly advanced by large-scale pretrained transformers, e.g. DALL-E [26] and CogView [3]. These models learn to generate image tokens in an autoregressive way. However, they also suffer from the following disadvantages:

Slow generation. Generation with autoregressive models is usually much slower than with non-autoregressive models, e.g. GANs [10], with the same FLOPs. Rather than being due to the large number of parameters, this shortcoming is mainly attributed to the token-by-token nature of autoregressive generation, which cannot exploit the parallel computing ability of GPUs, even after caching hidden states [25]. This is a significant limitation.

Expensive high-resolution training. The current large-scale pretrained models are generally based on Transformers [30], where the attention operation has both time and space complexity of $O(n^2)$ for training sequences of length $n$. Within a limited budget, we face a trade-off between the number of parameters, representing the modeling power, and the resolution of the generated images. For this reason, most current text-to-image models choose a resolution of 32×32 tokens (usually 256×256 pixels) [3, 26, 11], which is far less dense than the resolution of real photos.

Unidirectionality. For images, autoregressive models, e.g. GPTs, usually generate tokens in raster-scan order. This order shows the best perplexity during evaluation [7]. However, this order makes the models unaware of the tokens below or to the right during generation, so that text-guided infilling is not supported. Moreover, the unidirectionality leads to a gap between pretrained text-to-image models and vision transformers (ViTs) [5] based on bidirectional masked prediction, e.g. MAE [12] and SimMIM [34], limiting their application to traditional visual tasks, such as image classification and object detection.
Present Work. To overcome these defects, we first propose a simple and versatile pretraining method, a Cross-Modal general Language Model (CogLM). Our CogLM masks various types of tokens in the sequence of text and image tokens, and learns to predict them autoregressively. Specifically, (1) if we mask all the image tokens, the task becomes the same as that of the original CogView [3], i.e. text-to-image generation; (2) if we mask random patches of image tokens, it works similarly to MAE as an infilling task; (3) if we mask text tokens, the task becomes image captioning. The versatility of CogLM enables us to fine-tune a pretrained CogLM for different downstream tasks, and to construct a hierarchical model, CogView2. There are three steps in the hierarchical generation process, as follows:

1. First, we generate a batch of low-resolution images (20×20 tokens in CogView2) using the pretrained CogLM, and then (optionally) filter out the bad samples based on the perplexity of CogLM image captioning, which is the post-selection method introduced in CogView [3].
2. The generated images are mapped into 60×60-token images by a direct super-resolution module fine-tuned from the pretrained CogLM. We use local attention implemented by our customized CUDA kernel to reduce the training expense. The high-resolution images from this step usually have inconsistent textures and lack details.
3. These high-resolution images are refined via another iterative super-resolution module fine-tuned from the pretrained CogLM. Most tokens are re-masked and re-generated in a local parallel autoregressive (LoPAR) way, which is much faster than the original autoregressive generation.

How does CogView2 conquer the three defects? First, during pretraining, the masked patch prediction task trains CogLM to handle bidirectional context, making it easy to adapt to bidirectional tasks, such as direct and iterative super-resolution. Second, the hierarchical design allows us to care only about local coherence at the high-resolution level. In this way, local attention can be leveraged to reduce the training expense. Third, local parallel autoregressive generation reduces the number of model forward passes from 3,600 to 6 (only 1/600), significantly accelerating the generation of high-resolution images. CogView2 is about 10× faster than CogView (with sliding-window super-resolution) for generating images of similar resolution and better quality.

[Figure 2 caption, beginning truncated: "...Image) is the separator token. Mask regions are sampled according to different strategies. Only the second-to-last tokens in the mask regions are predicted to compute the loss. (Right) The mask regions are implemented only by changing the attention mask matrix, without any modification of the input tokens. In the attention mask matrix, the rows and columns of all the masked tokens (rows and columns 2, 3, 4, 6, 7, 8) can be extracted together to form a lower-triangular attention mask matrix."]

2 Related Work

Text-to-image generation for arbitrary inputs is a long-held dream for many cross-modal machine-learning researchers. Most early attempts to address this challenge were based on Generative Adversarial Nets [10]; these include AttnGAN [35], DM-GAN [40], DF-GAN [28], and others. Although they can perform vivid synthesis on domain-specific datasets, such as Caltech-UCSD Birds 200, general-domain datasets, such as MS COCO [17], present great challenges for these methods.
DALL-E [26], CogView [3] and similar works [33, 8] leverage VQ-VAE [29] to compress an image to a sequence of discrete tokens and pretrain large transformers for autoregressive generation, greatly improving results in the general domain. LAFITE [39] learns to invert the pretrained CLIP [23] embeddings in the shared space of text and image for text-free training. Recently, many researchers have turned to diffusion models, largely due to the slow generation of autoregressive models; one example is Glide [19].

Non-autoregressive generation (NAR) has recently become a popular topic in natural language generation—see Mask-Predict [9] and GLAT [21], which explore parallel decoding methods for autoregressive-like models. Generation speed was not an issue in the era when GANs dominated image generation, but constitutes a considerable challenge for current autoregressive text-to-image models. M6-UFC [38] first introduces NAR methods into the VQ-VAE framework, and similar ideas are adopted by VQ-diffusion [11] and MaskGIT [1]. A possible drawback of pure NAR methods is that tokens sampled at the same time might lead to global inconsistency in later steps during the generation of complex scenes. Our method introduces a hierarchical design to combine the consistency merit of autoregressive models and the speed advantage of NAR methods.

3 Method

3.1 The Cross-Modal General Language Model

While previous self-supervised pretext tasks often target mask prediction in computer vision [34, 12], our approach pursues a unification of autoregressive generation and bidirectional context-aware mask prediction. In NLP, the General Language Model (GLM) [6] suggests changing direct mask prediction into blockwise autoregressive generation. However, directly applying it to images would result in redundancy. For instance, the sizes of the masked image patches are fixed, so we do not need the capacity for filling blocks of indefinite length as in NLP. Moreover, GLM inserts a sentinel token for each mask region to predict its first token, which greatly increases the sequence length and thus restricts the usage of 2D local attention.

Based on the analysis above, we present a simpler and more general language model for both text and image data—the Cross-Modal General Language Model (CogLM). As shown in Figure 2, CogLM takes as input a concatenation of text and images tokenized by icetk (http://github.com/THUDM/icetk; see § 3.2), whose dictionary contains 20,000 image tokens and 130,000 text (both Chinese and English) tokens. Formally, let $t = [t_1, \ldots, t_M]$ be the text tokens and $im = [im_1, \ldots, im_{N^2}]$ be the image tokens, where $M$ and $N^2$ are the lengths of the text and image token sequences, respectively. The crucial step in CogLM is to sample $k$ mask regions $R = \{[l_0, r_0], \ldots, [l_k, r_k]\}$ according to various strategies. In practice, the following two strategies are used:

• (Text-to-Image GPT) The input sequence is $x = [t\ \mathrm{[BOI]}\ im]$. We mask all the image tokens, which is similar to the pretraining task of CogView [3].
• (A Combination of Mask Prediction and Image Captioning) The input sequence is $x = [im_0\ \ldots\ im_i\ \ldots\ im_j\ \ldots\ im_{N^2}\ \mathrm{[BOE/C]}\ t]$, where [BOE] and [BOC] are separators meaning beginning-of-English and beginning-of-Chinese, used for the corresponding language. We mask random patches and the text tokens. Ideally, the two tasks should be separated, but we combine them for training efficiency.

Instead of replacing the tokens in the mask regions with [MASK], we make no change to the input but build an attention mask $A$ based on the mask regions.
All tokens outside the mask regions are treated as context and can be attended to by all other tokens. A token in a mask region can only be attended to by tokens that are in mask regions and positioned behind it. Specifically,

$A[i, j] = \begin{cases} 1, & \text{if } \forall\, [l_u, r_u] \in R,\ j \notin [l_u, r_u], \\ 1, & \text{if } j \leq i \text{ and } \exists\, u, v \text{ (indices)},\ i \in [l_u, r_u] \in R,\ j \in [l_v, r_v] \in R, \\ 0, & \text{else.} \end{cases}$ (1)

Figure 2 shows an example of the attention mask matrix of two mask regions. In the mask regions, the model learns to predict the next token. The loss function can be written as follows:

$\mathcal{L} = -\frac{1}{\sum_u (r_u - l_u)} \sum_v \sum_{i=l_v}^{r_v - 1} \log p(x_{i+1} \mid x_{\leq i}, x_{\text{context}}),$ (2)

where $x_{\text{context}}$ denotes the tokens outside the mask regions.

Infilling. Note that the first token in each mask region is not predicted during training. This feature might seem to disable CogLM from image infilling or cloze filling in natural language, but the problem actually has a simple solution. During inference, we can move the last context token before each mask region into it, as illustrated in Figure 3. Although these moved tokens become blind spots for mask regions before them, they have few negative effects in practice. To further avoid this minor influence and fully maintain the context information, we deal with each mask region individually. For each region, we move only the last context token before this region, and keep all the known tokens outside the mask regions. Thus, we cannot reuse the cached hidden states from the previous region, slightly slowing down multi-region infilling. See Appendix A for samples.

Advantages over GPT [22], GLM [6] and MAE [12]. (GPT) The main advantage over GPT is that the modeling of bidirectional contexts is considered in CogLM, which will benefit many tasks relying on global information, e.g. super-resolution in the next section and image classification. The importance of bidirectional context has been verified in the comparison of BERT [2] and GPT on GLUE [31]. (GLM) The main advantage over GLM is simplicity. To unify generation and bidirectional understanding, GLM needs to define many new special tokens and a new type of position embedding, insert a sentinel for each mask region, and change the order of input tokens. This destroys the spatial relevance in the image data and excludes the possibility of using 2D local attention or convolution. (MAE) MAE is designed for self-supervised learning on pure image data and is not ready for generation. Even without text, CogLM is more parameter-efficient, because MAE is an encoder-decoder structure; a considerable fraction of the parameters in the encoder and decoder are learned for the same function, e.g. extracting basic features from inputs.
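The mask of Eq. 1 is straightforward to construct; below is a small sketch (our own illustrative implementation, not the released CogView2 code), reproducing the two-region example of Figure 2:

```python
import numpy as np

def coglm_attention_mask(seq_len, regions):
    """Build the CogLM attention mask of Eq. 1.
    regions: list of (l, r) index pairs, inclusive, marking masked spans."""
    in_region = np.zeros(seq_len, dtype=bool)
    for l, r in regions:
        in_region[l:r + 1] = True
    A = np.zeros((seq_len, seq_len), dtype=np.int8)
    for i in range(seq_len):
        for j in range(seq_len):
            if not in_region[j]:              # context: visible to every token
                A[i, j] = 1
            elif in_region[i] and j <= i:     # causal among masked tokens
                A[i, j] = 1
    return A

# Two mask regions, as in Fig. 2: tokens 1-3 and 5-7 of an 8-token sequence
# (0-indexed; rows/columns 2,3,4,6,7,8 in the figure's 1-indexed numbering).
print(coglm_attention_mask(8, [(1, 3), (5, 7)]))
```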
Transformer. The backbone of our pretrained CogLM is a Transformer with Sandwich LayerNorm [3]. The model has 6 billion parameters (48 layers, hidden size 3,072, 48 attention heads) and is trained for 300,000 iterations in FP16 with batch size 4,096. The sequence length is 512, consisting of 400 image tokens, 1 separator, and up to 111 text tokens.

Masking Strategy. We randomly select a sampling strategy for each training sample. For the mask prediction strategy, the analysis in SimMIM [34] shows the great importance of the mask percentage and patch distribution. Following their results, we sample 4×4 token patches at random until 75% of the tokens are in mask regions. For bilingual samples, we randomly choose one of the languages during training.

3.3 Hierarchical Generation

Although the pretrained CogLM can generate images from text, the resolution is only 20×20 tokens (160×160 pixels). This short sequence is intentional, for fast generation. The versatility of CogLM allows us to fine-tune it into super-resolution models; the whole hierarchical pipeline makes up our CogView2 system.

Direct super-resolution. In this step, we want a model that maps a generated low-resolution image token sequence $im^0 \in [0, 20000)^{20\times 20}$ to a higher-resolution sequence $im^1 \in [0, 20000)^{60\times 60}$. We fine-tune the pretrained CogLM into an encoder-decoder architecture. The input of the encoder is the 20×20 sequence of generated image tokens, and the input of the decoder is simply a 60×60 sequence of [MASK] tokens. We do not follow the original Transformer [30] in adding a cross-attention layer; instead, we let the tokens in the decoder attend to local tokens in both the decoder and the encoder. This cross-resolution local attention is implemented via a customized CUDA kernel introduced in § 4.2. Both the encoder and the decoder are initialized from the pretrained CogLM. In practice, we find it sufficient to fine-tune only the weights of the attention layers in the decoder, so that the other parameters can be frozen and shared between the encoder and decoder to reduce memory consumption.

Although direct mapping is a traditional practice for super-resolution (e.g. SRCNN [4]), it hardly qualifies as generation; it focuses more on texture transformation. The loss function of direct mapping is token-based or pixel-based (as in MAE), meaning that it predicts or maximizes the marginal distribution $p(im^1_i \mid im^0)$ for each token $i$ instead of $p(im^1 \mid im^0)$. As we use a cross-entropy loss and multinomial sampling during generation, we get

$$
im^1 = [im^1_1, \dots, im^1_{60\times 60}], \quad im^1_i \sim p_\theta(im^1_i \mid im^0), \quad im^1_i \text{ and } im^1_j \text{ independent if } i \neq j.
\tag{3}
$$

[Figure 4: Super-resolution modules. Low-resolution images (20×20 tokens from CogLM) are mapped into high-resolution images (60×60 tokens) via the direct super-resolution module; 75% of the tokens are then masked and re-generated by the iterative super-resolution module. In each snapshot during the iterative super-resolution, all tokens of the same color are generated at the same time, and all local windows work in parallel. Example input text: "A great church."]

Therefore, we need to refine $im^1$ using another module.
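A minimal sketch of the independent per-token decoding in Eq. (3): every one of the 60×60 positions is sampled from its own categorical distribution in a single parallel step. The `decoder_logits` callable stands in for the fine-tuned CogLM decoder and is an assumption for illustration.

```python
import torch

VOCAB = 20000  # image-token vocabulary size

def direct_super_resolution(decoder_logits, im0: torch.Tensor) -> torch.Tensor:
    """Sample im^1 from p_theta(im^1_i | im^0), independently per token (Eq. 3).

    decoder_logits: callable mapping a (20*20,) token sequence
                    to (60*60, VOCAB) logits.
    """
    logits = decoder_logits(im0)                   # (3600, VOCAB)
    probs = torch.softmax(logits, dim=-1)
    im1 = torch.multinomial(probs, num_samples=1)  # one draw per position, in parallel
    return im1.squeeze(-1)                         # (3600,) token ids
```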
Iterative super-resolution. In this step, we aim to refine the initial high-resolution sequence $im^1$ into a better one, $im^2$. The working principle of the refinement is to break the independence of the generated tokens while keeping the parallelism. We therefore propose a local parallel autoregressive (LoPAR) approach. The motivation of LoPAR is that the hierarchical process frees us from global dependence: as long as we keep 25% of the tokens (a ratio borrowed from MAE [12]) as random context, that is sufficient to recover the global scene of the image, and if the re-generated tokens are locally coherent with the 25% kept tokens, global coherence is also guaranteed. We mask 75% of the tokens of $im^1$ and assume that there is a local window size $w$ such that

$$
p(im^2_i \mid im^1) = p\big(im^2_i \mid \{im^1_j \mid \mathrm{dist}(i, j) < w \text{ and } j \text{ is not masked}\}\big),
\tag{4}
$$

$$
p(im^2_i \mid im^1, im^2_j) = p(im^2_i \mid im^1) \quad \text{if } \mathrm{dist}(i, j) > w,
\tag{5}
$$

so that local attention is sufficient and tokens from different local windows can be generated in parallel. To further increase the parallelism, we observe that local inconsistency usually occurs when directly adjacent (vertically or horizontally) tokens are generated at the same time. We therefore factorize the generation process diagonally into different iterations, as in Figure 4:

$$
p(im^2 \mid im^1) = \prod_{k=0}^{2w-2} \;\prod_{i:\, \mathrm{row}(i) + \mathrm{col}(i) = k} p\big(im^2_i \mid im^1, \{im^2_j \mid \mathrm{row}(j) + \mathrm{col}(j) < k\}\big),
\tag{6}
$$

where $\mathrm{row}(i) = \lfloor (i-1)/60 \rfloor \bmod w$ and $\mathrm{col}(i) = (i-1) \bmod w$ are the indices of the row and column within the local window. To implement the iterative super-resolution module, we fine-tune the pretrained CogLM for 20,000 iterations into a BERT-style masked prediction model on 60×60-token sequences with local attention. The mask ratio is sampled from {0.2, 0.4, 0.6, 0.8, 0.9} for each sample. During inference, we set the local window size to $w = 6$ and compress the iterative process from $2w - 1$ to 6 iterations by arranging the unmasked tokens and merging the first and final iterations (implemented by a manually designed 6×6 matrix; details are included in our released code).

4 Plug-in Improved Techniques for Transformers

4.1 Cluster Sampling

In autoregressive generation, the sampling strategy over the predicted token distribution is crucial. Top-k and top-p (nucleus) sampling [14] are the most common strategies, but they suffer from an incomplete truncation problem. The vocabulary of image tokens is learned by a VQ-VAE [29], where the embeddings of some tokens are very similar. To represent frequent patterns at a finer granularity, we use a large vocabulary of 20,000 tokens (three times larger than that of previous works [26, 3]), which further exacerbates the situation. For instance, there are about 42 tokens in icetk that are basically "white" and show subtle differences only when connected to certain other tokens. Although the sum of the probabilities of these "white" tokens might be large, most of them could be filtered out by top-k sampling. Figure 5 illustrates the problem.

To solve this incomplete truncation problem, we propose cluster sampling. We group the 20,000 tokens into 500 clusters via K-means [18] based on their vectors in the VQ-VAE. During sampling, we first sample a cluster using top-k sampling based on the summed probabilities of the tokens in each cluster, and then sample a token within that cluster. All tokens within a cluster are treated as a whole and are filtered or kept together, alleviating the incomplete truncation problem.
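A minimal sketch of cluster sampling, assuming the cluster assignment has already been computed offline with K-means over the VQ-VAE codebook vectors; the top-k value `k = 50` is an illustrative assumption, while the 500-cluster count follows the text above.

```python
import torch

def cluster_sample(logits: torch.Tensor, cluster_of: torch.Tensor,
                   n_clusters: int = 500, k: int = 50) -> int:
    """Two-stage sampling: top-k over cluster masses, then a token within the cluster.

    logits:     (V,) next-token logits from the model.
    cluster_of: (V,) long tensor mapping each token id to its K-means cluster id.
    """
    probs = torch.softmax(logits, dim=-1)
    # Stage 1: aggregate token probabilities into per-cluster masses.
    cluster_mass = torch.zeros(n_clusters).scatter_add_(0, cluster_of, probs)
    topk_mass, topk_ids = cluster_mass.topk(k)
    c = topk_ids[torch.multinomial(topk_mass / topk_mass.sum(), 1)]
    # Stage 2: sample a token inside the chosen cluster; tokens in a cluster
    # are kept or filtered together, avoiding incomplete truncation.
    in_cluster = (cluster_of == c).float()
    token_probs = probs * in_cluster
    return torch.multinomial(token_probs / token_probs.sum(), 1).item()
```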
4.2 Local Attention

Locality is one of the most important properties of image data. Local operations, e.g. convolution, dominated visual computing before ViTs [5], and even the attention in ViTs mainly captures interactions between local tokens [24]. We find it possible to fine-tune the pretrained CogLM with local attention and textual attention, which is generally compatible with the global attention weights learned during pretraining. However, 2D local attention cannot be implemented efficiently in high-level frameworks such as PyTorch [20]. We therefore develop a customized CUDA kernel that supports 2D local attention, 2D autoregressive local attention, and cross-resolution local attention. In the CUDA kernel implementation, we save half of the computation in the matrix multiplication and do not need a causal attention mask for autoregressive attention. In the super-resolution modules, we use local attention with a receptive field (RF) of 9×9. Figure 6 shows the benchmark for single-head attention with hidden size 64 on an A100 GPU. The advantage of our method is even more obvious in autoregressive scenarios, where it is up to 40× faster than global attention and consumes only 1% of the memory on sequences of length 4,096.

4.3 Upweighting Textual Attention

In the large training data of CogLM, most text-image pairs are only weakly relevant, so even a model that perfectly fits the data has a considerable probability of generating images irrelevant to the text. To strengthen this relevance, we leverage the explainability of the attention operation and add a constant $c$ to the attention scores from any token to the text tokens (the attention mask is omitted for simplicity):

$$
\mathrm{Attention}(Q, K, V, A) = \mathrm{softmax}\Big(\frac{Q^\top K}{\sqrt{d}} + [\underbrace{c \;\cdots\; c}_{\text{text part}} \;\underbrace{0 \;\cdots\; 0}_{\text{image part}}]\Big) V.
\tag{7}
$$

This technique adds negligible time cost but largely improves the textual relevance of the generated images. In practice, $c < 3$ does not affect the quality of the images.
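A minimal sketch of the attention upweighting in Eq. (7); the split point between text and image keys and the value of `c` are illustrative assumptions, and the attention mask is omitted as in the equation.

```python
import torch

def upweighted_attention(q, k, v, n_text: int, c: float = 1.0):
    """Scaled dot-product attention with a constant bonus c on text-token scores.

    q, k, v: (seq_len, d) tensors; the first n_text keys are text tokens.
    """
    d = q.size(-1)
    scores = q @ k.transpose(0, 1) / d**0.5  # (seq_len, seq_len)
    scores[:, :n_text] += c                  # upweight attention to the text part
    return torch.softmax(scores, dim=-1) @ v
```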
5 Experiments

5.1 Dataset

Our pretraining dataset contains about 30 million text-image pairs, mostly overlapping with that of CogView [3]. We filter out about 5 million text-image pairs from the CogView dataset using keywords such as "abstract" and "texture", because they are mostly background images used for design; these images consist of repeating patterns and contribute little to text-to-image generation. We then replenish the dataset with 5 million tag-image pairs. About half of the text is translated from English, and both the Chinese and English text are kept to train our bilingual CogLM. Only images with a resolution of at least 480×480 are used to train the super-resolution modules.

5.2 Machine Evaluation

To compare with previous and concurrent works, we follow the most popular benchmark, originating from DALL-E [26]: Fréchet Inception Distance (FID) and Inception Score (IS) evaluated on MS-COCO [17]. 30,000 captions from the validation set are sampled to evaluate the FID. Since each image in COCO has up to 5 different captions, we carefully select the sampled captions so that they describe different images. We generate 16 samples for each caption (translated into Chinese) and select the best one with the lowest caption perplexity (the Caption Score in [3]). Note that FID is not a perfect metric for evaluating CogView2, because (1) the advantage of CogView2 is the generation of high-resolution images, but we need to resize the images back to 256×256 for a meaningful comparison; (2) there are mistakes when translating the English captions into Chinese; and (3) our training data contain many single-object images, which are quite different from the distribution of COCO (common objects in context).

The results of the machine evaluation are shown in Table 1. We find that fine-tuning CogLM on the MS-COCO dataset largely improves the FID: during fine-tuning, the FID diminishes from 24.0 (0 iterations) → 19.2 (2,500 iterations) → 17.5 (7,500 iterations). However, we find that the human-evaluated quality of the generation deteriorates. Though the style becomes similar to COCO, the generation is not as accurate as with the non-fine-tuned version, which also corresponds to the human evaluation scores in Figure 7.

5.3 Human Evaluation

As the most persuasive metric, we conduct a large-scale human evaluation following the setting in CogView [3] (see Appendix for details). The experiments include a total of 4,600 groups of comparisons on COCO captions between several publicly available text-to-image models, including DF-GAN [28], LAFITE [39], CogView [3], CogView2 (including its version fine-tuned on COCO), and the ground truth recovered after VQ-VAE reconstruction. Note that the VQ-VAE in CogView2 is much better than that in CogView, which makes the recovered ground truth a stronger upper bound. The results are shown in Figure 7. An intriguing finding is that the fine-tuned CogView2, although it has a much better FID, performs worse than the original model. We conjecture that the fine-tuned model fits the style of the complex scenes in COCO, whereas the annotators may prefer generated samples with isolated subjects.

5.4 Analysis of the Speed and FLOPs of LoPAR

As discussed in § 1, our motivation is to increase the degree of parallelism for inference acceleration, even at the cost of more FLOPs. Autoregressive generation with cached hidden states has the same FLOPs as a teacher-forcing forward step, but is much slower (225.9 s vs. 858 ms at CogView2 scale). For LoPAR, both the time and the FLOPs are exactly N times (N = 6 in our setting) those of a single forward step. We compare the inference speed of the super-resolution stage under different strategies in Table 2.
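Plugging in the numbers above gives a rough sense of the gap (a back-of-the-envelope estimate, assuming each LoPAR iteration costs about one teacher-forcing forward step):

$$
t_{\text{LoPAR}} \approx N \cdot t_{\text{forward}} = 6 \times 858\,\text{ms} \approx 5.1\,\text{s},
$$

versus roughly 225.9 s for token-by-token autoregressive decoding at the same scale, i.e. about a 44× speedup before any kernel-level savings from local attention.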
6 Discussion

Autoregressive or diffusion? Although GPTs have achieved great success in text generation, diffusion models are becoming increasingly popular in image generation. Here we compare diffusion models with autoregressive models from the perspective of speed, the largest disadvantage of autoregressive models as discussed in § 1. With the same architecture, diffusion models require more FLOPs but have a high degree of parallelism. They can also trade off quality against time consumption by manually scheduling the sampling stride; for example, Glide [19] samples 250 diffusion steps for evaluation and 27 steps for interactive sampling, reducing the latency to 15 s. Autoregressive models must generate the image token by token, but our LoPAR can upsample the image with a high degree of parallelism, so that (potentially) we can reduce the time cost by introducing more hierarchies and design models much faster than diffusion models.

Comparison between DALL-E-2 and CogView2. DALL-E-2 [27] is a recently released concurrent work for text-to-image generation at 1024×1024 resolution. Its probabilistic model and architecture are quite different from those of CogView2, but both models share the same spirit: hierarchical generation. The difference is that DALL-E-2 adopts an additional third-level super-resolution module and a generation prior, which bring potential quality gains but are also resource-consuming. CogView2 is able to synthesize scenes similar to the limited demos of DALL-E-2, e.g. "lion teacher" (Figure 1) vs. "panda scientist" (DALL-E-2), even though CogView2 is trained on only about 5% of the 650M text-image pairs used by DALL-E-2. In the future, CogView2 could also adopt a third-level super-resolution module and a prior, though this is mostly an engineering effort.

7 Conclusion

The recent breakthrough in the text-to-image domain was made by autoregressive models; however, their slow generation and high complexity hinder researchers' attempts to improve quality in this direction. In this paper, we put forward an approach based on hierarchical transformers that helps autoregressive models remedy these disadvantages and bridges the gap between text-to-image pretraining and recent visual representation learning methods.

Broader Impact. Advances in text-to-image generation, especially text-guided image editing, will ease the creative efforts of artists and designers, while also posing a risk of misinformation and lasting damage to the reliability of web photos. However, it is possible to train a classifier to distinguish real images from CogView2-generated ones based on texture features.

Acknowledgments and Disclosure of Funding

We would like to thank Zhao Xue and Sha Yuan for their help in collecting the dataset, Hanxiao Qu for maintaining the machines, Yue Cao and Chang Zhou for useful discussions, and Zhendong Zhang for releasing an initial version of the CUDA local attention. Funding in direct support of this work: GPU hours donated by BAAI; NSFC for Distinguished Young Scholar (61825602).
1. What is the focus and contribution of the paper on text2image generation? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and quality? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. Do you have any concerns about the quality and fidelity of the generated samples? 5. Are there any limitations to the proposed method that should be considered?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents CogView2, which aims for faster and better text-to-image generation, building on the authors' previous work CogView. The key idea is to adopt a coarse-to-fine approach: first generate low-resolution tokens (20×20), then apply super-resolution to 60×60 tokens for high-resolution images.

Strengths And Weaknesses Strengths: This paper tries to solve two critical issues of current autoregressive text-to-image models: quality and efficiency. This paper provides many visualization results. Weaknesses: This paper claims too many things but does not verify them clearly in experiments. a) The training and inference efficiency of text-to-image generation are not evaluated or compared against previous methods in terms of wall-clock time or FLOPs. b) As introduced in the introduction, CogLM is general enough for many tasks such as infilling and image captioning, but these capabilities are not presented in the paper. The comparison with current works is not convincing. For example, CogView2 does not show significant improvements over previous methods on FID-0, and previous SOTA methods like Make-A-Scene and DALL-E 2 did not report FID-1 to FID-8 results for comparison. Besides, some of the latest works, such as latent-space diffusion and VQ-Diffusion, are missing from the comparison table. The quality and fidelity of the generated samples presented in the paper are not that impressive: first, the generated images are still blurry; second, we can observe clearly unreasonable structures in human hands and faces.

Questions Please refer to the weaknesses part.

Limitations Yes.
NIPS
Title CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

Abstract Development of transformer-based text-to-image models is impeded by their slow generation and the high complexity of high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and fine-tune it for fast super-resolution. The new text-to-image system, CogView2, shows generation performance competitive with the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing of images.

[Figure 1: Text-to-Image samples from CogView2, which supports both Chinese and English. The actual input text is in Chinese, translated into English here for better understanding. Panel prompts: A lion man is typing in the office. · A beautiful girl is hugging a husky. · A lion teacher wearing a suit is in front of a blackboard. · A robot is riding under the blue and cloudy sky. · Several youths are talking in a bar. · A young woman is taking photos. · A tiger with angel's wings. · A girl holding an oil-paper umbrella in a rainy lane. · Earth in the Eye. · A magnificent church. Sketch. · Mount Fuji, cherry blossom and Akita dog. Oil painting. · A pirate captain with a skull.]

Code and a demo website will be updated at https://github.com/THUDM/CogView2.

36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Recently, text-to-image generation has been greatly advanced by large-scale pretrained transformers, e.g. DALL-E [26] and CogView [3]. These models learn to generate image tokens autoregressively, but they also suffer from the following disadvantages:

Slow generation. Generation with autoregressive models is usually much slower than with non-autoregressive models, e.g. GANs [10], of the same FLOPs. Rather than the large number of parameters, this shortcoming is mainly attributed to the token-by-token nature of autoregressive generation, which cannot exploit the parallel computing ability of GPUs even after caching hidden states [25]. This is a significant limitation.

Expensive high-resolution training. Current large-scale pretrained models are generally based on Transformers [30], where the attention operation has both time and space complexity of $O(n^2)$ for training sequences of length $n$. Within a limited budget, we face a trade-off between the number of parameters, which determines the modeling power, and the resolution of the generated images. For this reason, most current text-to-image models choose a resolution of 32×32 tokens (usually 256×256 pixels) [3, 26, 11], which is far less dense than the resolution of real photos.

Unidirectionality. For images, autoregressive models such as GPTs usually generate tokens in raster-scan order, because this order shows the best perplexity during evaluation [7]. However, it makes the models unaware of the tokens below or to the right of the current position during generation, so text-guided infilling is not supported. Moreover, the unidirectionality creates a gap between pretrained text-to-image models and vision transformers (ViTs) [5] based on bidirectional masked prediction, e.g. MAE [12] and SimMIM [34], limiting their application to traditional visual tasks such as image classification and object detection.
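To make the quadratic cost concrete (a rough illustration, ignoring constant factors): moving from the common 32×32-token resolution to the 60×60 tokens that CogView2 targets multiplies the cost of a global attention layer by roughly

$$
\left(\frac{60 \times 60}{32 \times 32}\right)^2 \approx 3.5^2 \approx 12.4,
$$

which is why a full-length global-attention transformer at high resolution quickly becomes impractical, and why the hierarchical design described next keeps the expensive global stage at 20×20 tokens.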
Present Work. To overcome these defects, we first propose a simple and versatile pretraining method, the Cross-Modal general Language Model (CogLM). CogLM masks various types of tokens in a sequence of text and image tokens and learns to predict them autoregressively. Specifically, (1) if we mask all the image tokens, the task becomes the same as in the original CogView [3], i.e. text-to-image generation; (2) if we mask random patches of image tokens, the task works similarly to MAE as an infilling task; (3) if we mask the text tokens, the task becomes image captioning.

The versatility of CogLM enables us to fine-tune a single pretrained CogLM for different downstream tasks and to construct a hierarchical model, CogView2. There are three steps in the hierarchical generation process:

1. First, we generate a batch of low-resolution images (20×20 tokens in CogView2) using the pretrained CogLM, and then (optionally) filter out bad samples based on the perplexity of CogLM image captioning, the post-selection method introduced in CogView [3].

2. The generated images are mapped into 60×60-token images by a direct super-resolution module fine-tuned from the pretrained CogLM. We use local attention, implemented via our customized CUDA kernel, to reduce the training expense. The high-resolution images from this step usually have inconsistent textures and lack details.

3. These high-resolution images are refined via another iterative super-resolution module fine-tuned from the pretrained CogLM. Most tokens are re-masked and re-generated in a local parallel autoregressive (LoPAR) way, which is much faster than ordinary autoregressive generation.

How does CogView2 conquer the three defects? First, the masked patch prediction task during pretraining trains CogLM to handle bidirectional context, making it easy to adapt to bidirectional tasks such as direct and iterative super-resolution. Second, the hierarchical design means we only need to care about local coherence at the high-resolution level, so local attention can be leveraged to reduce the training expense. Third, local parallel autoregressive generation reduces the number of model forward runs from 3,600 to 6 (only 1/600), significantly accelerating the generation of high-resolution images. Overall, CogView2 is about 10× faster than CogView (with sliding-window super-resolution) at generating images of similar resolution, and of better quality.

[Figure 2: (Left) [BOI] (Beginning-Of-Image) is the separator token. Mask regions are sampled according to different strategies. Only the second through the last tokens in each mask region are predicted to compute the loss. (Right) The mask regions are implemented purely by changing the attention mask matrix, without any modification of the input tokens. In the attention mask matrix, the rows and columns of all masked tokens (rows and columns 2, 3, 4, 6, 7, 8 here) can be extracted together to form a lower-triangular attention mask matrix.]

2 Related Work

Text-to-image generation for arbitrary inputs is a long-held dream of many cross-modal machine-learning researchers. Most early attempts to address this challenge were based on Generative Adversarial Nets [10], including AttnGAN [35], DM-GAN [40], and DF-GAN [28]. Although they can perform vivid synthesis on domain-specific datasets such as Caltech-UCSD Birds 200, general-domain datasets such as MS COCO [17] present great challenges for these methods.
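Putting the three-step pipeline from the Present Work paragraph together, the sketch below shows the overall control flow of CogView2 generation. It is schematic: the stage modules are injected as callables standing in for the fine-tuned CogLM components described in § 3.3, not the released API.

```python
def cogview2_generate(text_tokens, coglm_sample, caption_perplexity,
                      direct_sr, iterative_sr, decode, n_candidates=16):
    """Hierarchical text-to-image generation (schematic three-step pipeline)."""
    # Step 1: sample low-resolution candidates (20x20 tokens each) and
    # post-select the one with the lowest caption perplexity.
    candidates = [coglm_sample(text_tokens) for _ in range(n_candidates)]
    im0 = min(candidates, key=caption_perplexity)

    # Step 2: direct super-resolution to 60x60 tokens (per-token independent, Eq. 3).
    im1 = direct_sr(im0)

    # Step 3: LoPAR refinement -- re-mask 75% of tokens and re-generate them
    # in 6 diagonal iterations, with all local windows running in parallel.
    im2 = iterative_sr(im1, mask_ratio=0.75, iterations=6)
    return decode(im2)  # VQ-VAE decoder: 60x60 tokens -> 480x480 pixels
```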
1. What is the focus and contribution of the paper on text-to-image generation? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and hierarchical transformer? 3. What are the weaknesses of the paper, especially regarding the limitations of the proposed method? 4. Do you have any questions regarding the local parallel auto-regressive generation? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Autoregressive transformer-based text-to-image models can generate elegant pictures, but they also face slow generation for high-resolution images, due to the extremely long sequence length and the autoregressive decoding scheme. In this work, the authors propose CogView2, a hierarchical transformer that adopts local parallel autoregressive generation instead of global AR. The proposed CogView2 achieves comparable or even better results than CogView and speeds up inference a lot. Overall, CogView2 is an efficient version of CogView, made faster by the local parallel decoding.

Strengths And Weaknesses Strengths: The proposed CogView2 consists of several modules. First, they use CogLM to generate a preliminary image of 20×20 tokens. The following super-resolution is based on this initial image, and I think a useful finding here is that we do not need to predict a high-resolution image from scratch, since a small 20×20-token image may already carry the information given by the text. This may motivate later researchers to design more efficient text-to-image models. Local parallel autoregressive decoding is interesting: traditional AR text-to-image generation flattens the image into a 1-D sequence and generates the visual tokens from left to right, which is quite time-consuming and also ignores spatial correlations. Thanks to the flexibility of the pretrained general language model, CogView2 can perform image completion naturally. It achieves results comparable to autoregressive modeling while speeding up inference quite a lot. Weaknesses: There are no major concerns for me. Perhaps I would like to see a totally non-autoregressive model that can deliver results comparable to AR ones.

Questions Have you tried other local parallel generation schemes, e.g., left-to-right or top-to-bottom generation within a local patch in a non-autoregressive fashion? The attention mask shown in Figure 2 is somewhat difficult to understand: do you mean the token in the green box is masked and going to be generated? This slightly confuses me. What about the speed of the iterative super-resolution compared with the stacked image super-resolution models in Imagen?

Limitations There is no limitation discussed in the current version. The authors have addressed the potential negative societal impact.
NIPS
Title CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers Abstract Development of transformer-based text-to-image models is impeded by its slow generation and complexity, for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows competitive generation performance to the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images. A lion man is typing in the office. A beautiful girl is hugging a husky. A lion teacher wearing a suit is in front of a blackboard. A robot is riding under the blue and cloudy sky. Several youths are talking in a bar. A young woman is taking photos. A tiger with angel’s wings. A girl holding an oil-paper umbrella in a rainy lane. Earth in the Eye. A magnificent church. Sketch. Mount Fuji, cherry blossom and Akita dog. Oil painting. A pirate captain with a skull. Figure 1: Text-to-Image samples from CogView2, which supports both Chinese and English. The actual input text is in Chinese, translated into English here for better understanding. Codes and a demo website will be updated at https://github.com/THUDM/CogView2. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). N/A 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Introduction Recently, text-to-image generation has been greatly advanced by large-scale pretrained transformers, e.g. DALL-E [26] and CogView [3]. These models learn to generate image tokens in an autoregressive way. However, they also suffer from the following disadvantages: Slow generation. Generation of autoregressive models usually is much slower than generation of non-autoregressive models, e.g. GANs [10], with the same FLOPs. Instead of employing a large number of parameters, this shortcoming is mainly attributed to the nature of token-by-token generation used in the autoregressive models cannot exploit the parallel computing ability of GPUs, even after caching hidden states [25]. This is a significant limitation. Expensive high-resolution training. The current large-scale pretrained models are generally based on Transformers [30], where the attention operation has both time and space complexity of O(n2) for training sequences of length n. Within a limited budget, we face a trade-off between the number of parameters, representing the modeling power, and the resolution of the generated images. For this reason, most current text-to-image models choose a resolution of 32⇥ 32 tokens (usually 256⇥ 256 pixels) [3, 26, 11], which is far less dense than the resolution of the real photos. Unidirectionality. For images, autoregressive models, e.g. GPTs, usually generate tokens in rasterscan order. This order shows the best perplexity during the evaluation [7]. However, this order makes the models unaware of the tokens below or on the right side during generation, as a result text-guided infilling is not supported. Moreover, the unidirectionality leads to a gap between the pretrained text-to-image models and vision transformers (ViTs) [5] based on bidirectional masked prediction, e.g. MAE [12] and SimMIM [34]—limiting their application on traditional visual tasks, such as image classification and object detection. 
Present Work. To overcome these defects, we first propose a simple and versatile pretraining method, a Cross-Modal general Language Model (CogLM). Our CogLM masks various types of tokens in the sequence of text and image tokens, and learns to predict them autoregressively. Specifically, (1) if we mask all the image tokens, the task becomes the same as the original CogView [3] in performing a text-to-image generation task; (2) if we mask random patches of image tokens, it works similarly to MAE as an infilling task; (3) if we mask text tokens, the task becomes image captioning. The versatility of CogLM enables us to fine-tune a pretrained CogLM for different downstream tasks, and constructs a hierarchical model, CogView2.There are three steps in the hierarchical generation process as follows: 1. First, we generate a batch of low-resolution images (20⇥ 20 tokens in CogView2) using the pretrained CogLM, and then (optionally) filter out the bad samples based on the perplexity of CogLM image captioning, which is the post-selection method introduced in CogView [3]. 2. The generated images are mapped into 60⇥ 60-token images by a direct super-resolution module fine-tuned from the pretrained CogLM. We use local attention implemented by our customized CUDA kernel to reduce the training expense. The high-resolution images from this step usually have inconsistent textures and lack details. 3. These high-resolution images are refined via another iterative super-resolution module finetuned from the pretrained CogLM. Most tokens are re-masked and re-generated in a local parallel autoregressive (LoPAR) way, which is much faster than the original autoregressive generation. How does CogView2 conquer the three defects? First, during pretraining the masked patch prediction task trains CogLM to handle bidirectional context, making it easy to adapt to bidirectional tasks, such as direct and iterative super-resolution. Second, the hierarchical design allows us to care only about the local coherence at a high-resolution level. In this way, the local attention can be leveraged to reduce the training expense. Third, the local parallel autoregressive generation can reduce model run times from 3,600 to 6 (1/600 only), significantly accelerating the generation of high-resolution images. CogView2 is about 10⇥ faster than the CogView (with sliding-window super-resolution) for generating images of similar resolution and better quality. Image) is the separator token. Mask regions are sampled according to different strategies. Only the second-to-last tokens in the mask regions are predicted to compute the loss. (Right) The mask regions are only implemented by changing the attention mask matrix, without any modification on the input tokens. In the attention mask matrix, rows and columns of all the masked tokens (the 2,3,4,6,7,8 rows and columns) can be extracted together to form a low-triangle attention mask matrix. 2 Related Work Text-to-image generation for arbitrary inputs is a long-held dream for many cross-modal machinelearning researchers. Most early attempts to address this challenge were based on Generative Adversarial Nets [10]; these include AttnGAN [35], DM-GAN [40], DF-GAN [28], et al. Although they can perform vivid synthesis on domain-specific datasets, such as Caltech-UCSD Birds 200, general-domain datasets, such as MS COCO [17], present great challenges for these methods. 
DALLE [26], CogView [3] and similar works [33, 8] leverage VQ-VAE [29] to compress an image to a sequence of discrete tokens and pretrain large transformers for autoregressive generation, greatly improving results in the general domain. LAFITE [39] learns to invert the pretrained CLIP [23] embeddings in the shared space of text and image for text-free training. Recently, many researchers have turned to diffusion models, largely due to the slow generation defect of autoregressive models. One example is Glide [19]. Non-autoregressive generation (NAR) is recently a popular topic in natural language generation— see Mask-Predict [9] and GLAT [21], which explores parallel decoding methods for autoregressivelike models. Generation speed was not an issue in the era when GANs dominated the image generation, but constitutes a considerable challenge for current autoregressive text-to-image models. M6-UFC [38] first introduces NAR methods into the VQ-VAE framework, and similar ideas are adopted by VQ-diffusion [11] and MaskGIT [1]. A possible drawback of pure NAR methods is that tokens sampled at the meantime might lead to global inconsistency in later steps during the generation of complex scenes. Our method introduces a hierarchical design to combine the consistency merit of autoregressive models and the speed advantage of NAR methods. 3 Method 3.1 The Cross-Modal General Language Model While previous self-supervised pretext tasks often target at mask prediction in the computer vision [34, 12], our approach pursues a unification of autoregressive generation and bidirectional context-aware mask prediction. In NLP, the General Language Model (GLM) [6] suggests changing the direct mask prediction into blockwise autoregressive generation. However, directly applying it to images would result in redundancy. For instance, the sizes of the masked image patches are fixed, thus we do not need the capacity of filling blocks of indefinite length as in NLP. Moreover, GLM inserts a sentinel token for each mask region to predict its first token, which greatly increases the sequence length thus restricts the usage of 2D local attention. Based on the analysis above, we present a simpler and more general language model for both text and image data—Cross-modal general Language Model (CogLM). As shown in Figure 2, CogLM takes as input a concatenation of text and images tokenized by icetk 1 (See § 3.2), whose dictionary contains 20,000 image tokens and 130,000 text (both Chinese and English) tokens. Formally, let t = [t1, ..., tM ] be the text tokens and im = [im1, ..., imN2 ] be the image tokens, where M and N2 are the lengths of text and image tokens respectively. The crucial step in CogLM is to sample k mask regions R = {[l0, r0], ..., [lk, rk]} according to various strategies. In practice, the following two strategies are used: • (Text-to-Image GPT) The input sequence is x = [t [BOI] im ]. We mask all the image tokens, which is similar to the pretraining task of CogView [3]. • (A Combination of Mask Prediction and Image Captioning) The input sequence is x = [im0 ... imi ... imj ... imN2 [BOE/C] t ], where [BOE],[BOC] are separators meaning beginning-of-English and beginning-of-Chinese used for the corresponding language. we mask random patches and the text tokens. Ideally, the two tasks should be separated; but we combine them together for training efficiency. Instead of replacing the tokens in the mask regions as [MASK], we make no change in the input but build an attention mask A based on the mask regions. 
All tokens outside mask regions are seen as context and can be attended to by all other tokens. A token in a mask region can only be attended to by tokens in mask regions that are at or behind its position. Specifically,
$$A[i, j] = \begin{cases} 1, & \text{if } \forall [l_u, r_u] \in R,\; j \notin [l_u, r_u], \\ 1, & \text{if } j \le i \text{ and } \exists u, v \text{ such that } i \in [l_u, r_u] \in R \text{ and } j \in [l_v, r_v] \in R, \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$
Figure 2 shows an example of the attention mask matrix of two mask regions. In the mask regions, the model learns to predict the next token. The loss function can be written as follows:
$$\mathcal{L} = -\frac{1}{\sum_u (r_u - l_u)} \sum_v \sum_{i=l_v}^{r_v - 1} \log p(x_{i+1} \mid x_{\le i},\, x_{\text{context}}), \quad (2)$$
where $x_{\text{context}}$ denotes the tokens outside the mask regions.
Infilling. Note that the first token in each mask region is not predicted during training. This feature seems to prevent CogLM from performing image infilling or cloze filling in natural language, but the problem actually has a simple solution. During inference, we can move the last context token before each mask region into it, as illustrated in Figure 3. Although these moved tokens become blind spots for the mask regions before them, they have few negative effects in practice. To further avoid this minor influence and fully maintain the context information, we can deal with each mask region individually: for each region, we move only the last context token before this region and keep all the known tokens outside the mask regions. In this case we cannot reuse the cached hidden states from the previous region, which slightly slows down multi-region infilling. See Appendix A for samples.
Advantages over GPT [22], GLM [6] and MAE [12]. (GPT) The main advantage over GPT is that CogLM models bidirectional contexts, which benefits many tasks relying on global information, e.g. super-resolution in the next section and image classification. The importance of bidirectional context has been verified in the comparison of BERT [2] and GPT on GLUE [31]. (GLM) The main advantage over GLM is simplicity. To unify generation and bidirectional understanding, GLM needs to define many new special tokens and a new type of position embedding, insert a sentinel for each mask region, and change the order of the input tokens. This destroys the spatial relevance of the image data and excludes the possibility of using 2D local attention or convolution. (MAE) MAE is designed for self-supervised learning on pure image data and is not ready for generation. Even without text, CogLM is more parameter-efficient, because MAE is an encoder-decoder structure in which a considerable part of the parameters in the encoder and decoder are learned for the same function, e.g. extracting basic features from inputs.
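To make Eq. (1) concrete, the following is a minimal NumPy sketch of the mask construction. Indices are 0-based, and the function name is a placeholder of this sketch, not code from the authors' release.

```python
import numpy as np

def coglm_attention_mask(seq_len, mask_regions):
    """Build the CogLM attention-mask matrix of Eq. (1).

    mask_regions: list of (l, r) index pairs (inclusive, 0-based),
    standing in for the sampled regions R.
    """
    in_mask = np.zeros(seq_len, dtype=bool)
    for l, r in mask_regions:
        in_mask[l:r + 1] = True

    A = np.zeros((seq_len, seq_len), dtype=np.int8)
    for i in range(seq_len):
        for j in range(seq_len):
            if not in_mask[j]:
                A[i, j] = 1   # context tokens are visible to every token
            elif in_mask[i] and j <= i:
                A[i, j] = 1   # masked tokens attend autoregressively
    return A

# Example: 8 tokens, one mask region covering positions 3..6.
print(coglm_attention_mask(8, [(3, 6)]))
```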
3.2 Pretraining
Having introduced CogLM as a general pretraining framework, in this section we describe the details and hyperparameters of our pretrained CogLM.
Tokenization. We have developed a unified tokenizer, icetk, for images and for Chinese and English text. As shown in DebertaV2 [13], a large vocabulary (128,000 tokens) offers many benefits. For text, we extract a bilingual vocabulary of 130,000 tokens in icetk and explicitly classify them as Chinese, English, Common, or Rare Symbols, so that we can specify the generated language via a sampling mask. The image tokenizer is a 20,000-token first-stage VQ-VAE [29], largely following the tokenizer in CogView [3]. Inspired by Esser et al. [7], a perceptual loss term [37] is added to the reconstruction loss, significantly improving reconstruction performance. (See Appendix for details.)
Transformer. The backbone of our pretrained CogLM is a Transformer with Sandwich LayerNorm [3]. The model has 6 billion parameters (48 layers, hidden size 3072, 48 attention heads), trained for 300,000 iterations in FP16 with batch size 4,096. The sequence length is 512, consisting of 400 image tokens, 1 separator, and up to 111 text tokens.
Masking Strategy. We randomly select a sampling strategy for each training sample. For the mask prediction strategy, the analysis in SimMIM [34] exhibits the great importance of the mask percentage and patch distribution. Following their results, we sample 4×4 token patches at random until 75% of the tokens are in mask regions. For bilingual samples, we randomly choose one of the languages during training.
3.3 Hierarchical Generation
Although the pretrained CogLM can generate images from text, the resolution is only 20×20 tokens (160×160 pixels). The short sequence is intentional, for fast generation. The versatility of CogLM allows us to fine-tune it into super-resolution models. The whole hierarchical pipeline makes up our CogView2 system.
Direct super-resolution. In this step, we want a model to map a generated low-resolution image token sequence $im^0 \in [0, 20000)^{20 \times 20}$ to a higher-resolution sequence $im^1 \in [0, 20000)^{60 \times 60}$. We fine-tune the pretrained CogLM into an encoder-decoder architecture. The input of the encoder is the 20×20 sequence of generated image tokens, and the input of the decoder is just a 60×60 sequence of [MASK]. We do not follow the original Transformer [30] in adding a cross-attention layer; instead, we make the tokens in the decoder attend to local tokens in both the decoder and the encoder. This cross-resolution local attention is implemented via a customized CUDA kernel introduced in Section 4.2. Both the encoder and decoder are initialized using the pretrained CogLM. In practice, we find it sufficient to fine-tune only the weights of the attention layers in the decoder, so that we can fix and share the other parameters between the encoder and decoder to reduce memory consumption.
Although direct mapping is a traditional practice for super-resolution—e.g. SRCNN [4]—it hardly qualifies as generation; it focuses more on texture transformation. The loss function of direct mapping is token-based (or pixel-based, as in MAE), meaning that it predicts or maximizes the marginal distribution $p(im^1_i \mid im^0)$ for each token $i$ instead of the joint $p(im^1 \mid im^0)$. As we use a cross-entropy loss and multinomial sampling during generation, we get
$$im^1 = [im^1_1, \dots, im^1_{60 \times 60}], \quad im^1_i \sim p_\theta(im^1_i \mid im^0), \quad im^1_i \text{ and } im^1_j \text{ are independent if } i \neq j. \quad (3)$$
Figure 4: Super-resolution modules. Low-resolution images are mapped into high-resolution images via the direct super-resolution module. In each snapshot during the iterative super-resolution, all tokens of the same color are generated at the same time; all the local windows work in parallel.
Therefore, we need to refine $im^1$ using another module.
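To illustrate the independence in Eq. (3) that motivates the refinement step, here is a minimal PyTorch sketch of the per-token multinomial decoding. The shapes and the function name are illustrative assumptions of this sketch.

```python
import torch

def direct_sr_sample(logits):
    """Sample each high-resolution token independently from its marginal,
    as in Eq. (3). `logits` has shape (3600, vocab_size): one row of
    p_theta(im1_i | im0) per position of the 60x60 grid.
    """
    probs = torch.softmax(logits, dim=-1)
    # One multinomial draw per position; positions do not see each other,
    # which is exactly why textures can come out locally inconsistent.
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

tokens = direct_sr_sample(torch.randn(3600, 20000))
print(tokens.shape)  # torch.Size([3600])
```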
Iterative super-resolution. In this step, we aim to refine the initial high-resolution sequence $im^1$ into a better one, $im^2$. The working principle of the refinement is to break the independence of the generated tokens while keeping the parallelism. Thus, we propose a local parallel autoregressive (LoPAR) approach. The motivation behind LoPAR is that the hierarchical process frees us from global dependence: as long as we maintain 25% of the tokens – a ratio from MAE [12] – as random context, it is sufficient to recover the global scene of the image, and if the re-generated tokens are locally coherent with the 25% kept tokens, global coherence is also guaranteed. We mask 75% of the tokens of $im^1$ and assume that there is a local window size $\delta$ such that
$$p(im^2_i \mid im^1) = p(im^2_i \mid \{im^1_j : \mathrm{dist}(i, j) < \delta \text{ and } j \text{ is not masked}\}), \quad (4)$$
$$p(im^2_i \mid im^1, im^2_j) = p(im^2_i \mid im^1) \quad \text{if } \mathrm{dist}(i, j) > \delta, \quad (5)$$
so that local attention is sufficient and tokens from different local windows can be generated in parallel. To further increase the parallelism, we find that local inconsistency usually occurs when directly adjacent (vertically or horizontally) tokens are generated at the same time. We therefore factorize the generation process into different iterations diagonally, as in Figure 4 and below:
$$p(im^2 \mid im^1) = \prod_{k=0}^{2\delta - 2} \; \prod_{i:\, \mathrm{row}(i) + \mathrm{col}(i) = k} p\big(im^2_i \mid im^1, \{im^2_j \mid \mathrm{row}(j) + \mathrm{col}(j) < k\}\big), \quad (6)$$
where $\mathrm{row}(i) = \lfloor (i-1)/60 \rfloor \bmod \delta$ and $\mathrm{col}(i) = (i-1) \bmod \delta$ are the row and column indices within the local window. To implement the iterative super-resolution module, we fine-tune the pretrained CogLM for 20,000 iterations into a BERT-style masked prediction model on 60×60-token sequences with local attention. The mask ratio is sampled from {0.2, 0.4, 0.6, 0.8, 0.9} for each sample. During inference, we set the local window size to $\delta = 6$ and compress the iterative process from $2\delta - 1$ to 6 iterations by arranging the unmasked tokens and merging the first and final iterations (implemented by a manually designed 6×6 matrix; details are included in our released code).
4 Plug-in Improved Techniques for Transformers
4.1 Cluster Sampling
In autoregressive generation, the sampling strategy over the predicted distribution of the tokens is crucial. Top-k and top-p (nucleus) sampling [14] are the most common strategies, but they suffer from an incomplete truncation problem. The vocabulary of the image tokens is learned by a VQ-VAE [29], in which the embeddings of some tokens are very similar. To represent frequent patterns at a finer granularity, we use a large vocabulary of 20,000 tokens, three times larger than that of previous works [26, 3], which further exacerbates the situation. For instance, there are about 42 basically “white” tokens in icetk, which show subtle differences only when connected to some other tokens. Although the sum of the probabilities of these “white” tokens might be large enough, most of them could be filtered out by top-k sampling. Figure 5 illustrates the problem.
To solve the incomplete sampling problem, we propose cluster sampling. We group the 20,000 tokens into 500 clusters via K-means [18] based on their vectors in the VQ-VAE. During sampling, we first sample a cluster using top-k sampling based on the sums of the probabilities of the tokens in the clusters, and then sample a token within that cluster. All the tokens within a cluster are treated as a whole and are filtered or kept together, alleviating the incomplete truncation problem.
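A minimal PyTorch sketch of the two-stage procedure follows; the function and variable names are hypothetical, k is an arbitrary choice, and a real implementation would batch this.

```python
import torch

def cluster_sample(logits, cluster_of_token, num_clusters, k=50):
    """Two-stage cluster sampling: pick a cluster by its summed probability
    (top-k truncated), then pick a token inside that cluster.

    cluster_of_token: LongTensor mapping each token id to its K-means cluster.
    """
    probs = torch.softmax(logits, dim=-1)                    # (vocab,)
    cluster_probs = torch.zeros(num_clusters).scatter_add_(
        0, cluster_of_token, probs)                          # sum per cluster

    topk_p, topk_idx = torch.topk(cluster_probs, k)          # truncate clusters
    cluster = topk_idx[torch.multinomial(topk_p / topk_p.sum(), 1)]

    in_cluster = (cluster_of_token == cluster)               # tokens kept whole
    token_probs = probs * in_cluster
    return torch.multinomial(token_probs / token_probs.sum(), 1).item()

vocab, n_clusters = 20000, 500
cluster_map = torch.randint(n_clusters, (vocab,))            # toy cluster ids
print(cluster_sample(torch.randn(vocab), cluster_map, n_clusters))
```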
4.2 Local Attention
Locality is one of the most important properties of image data. Local operations, e.g. convolution, dominated visual computing before ViTs [5], and even the attention in ViTs mainly deals with interactions between local tokens [24]. We find it possible to fine-tune the pretrained CogLM using local attention and textual attention, which is generally compatible with the global attention weights from pretraining. However, 2D local attention cannot be implemented efficiently using high-level frameworks, e.g. PyTorch [20]. We develop a customized CUDA kernel to support 2D local attention, 2D autoregressive local attention, and cross-resolution local attention. In the CUDA kernel implementation, we can save half of the computation in the matrix multiplication and do not need a causal attention mask for the autoregressive attention. In the super-resolution modules, we use local attention with a receptive field (RF) of 9×9. Figure 6 shows the benchmark for single-head attention with hidden size 64 on an A100 GPU. The advantage of our method is even more obvious in autoregressive scenarios, where it is up to 40× faster and consumes only 1% of the memory of global attention on sequences of length 4,096.
4.3 Upweighting Textual Attention
Most text-image pairs in the large training data of CogLM are only weakly relevant. Even if the model fits the data perfectly, it still has a considerable probability of generating irrelevant images. To strengthen the relevance, we leverage the explainability of the attention operation and add a constant $c$ to the attention scores from any token to the text tokens (the attention mask is omitted for simplicity):
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^\top}{\sqrt{d}} + [\underbrace{c \;\dots\; c}_{\text{text part}}\;\underbrace{0 \;\dots\; 0}_{\text{image part}}]\Big)V. \quad (7)$$
This technique incurs negligible extra time but largely improves the textual relevance of the generated images. In practice, $c < 3$ does not influence the quality of the images.
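As a concrete reading of Eq. (7), here is a minimal single-head PyTorch sketch. The attention mask is omitted, as in the equation, and the token layout (text tokens first) and names are assumptions of this sketch.

```python
import torch

def upweighted_attention(q, k, v, num_text_tokens, c=2.0):
    """Textual-attention upweighting of Eq. (7).

    q, k, v: (seq_len, d) tensors whose first `num_text_tokens` rows
    correspond to text tokens.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (seq_len, seq_len)
    scores[:, :num_text_tokens] += c              # boost attention *to* text
    return torch.softmax(scores, dim=-1) @ v

out = upweighted_attention(torch.randn(10, 64), torch.randn(10, 64),
                           torch.randn(10, 64), num_text_tokens=4)
```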
5 Experiments
5.1 Dataset
Our dataset for pretraining contains about 30 million text-image pairs, mostly overlapping with that of CogView [3]. We filter out about 5 million text-image pairs from the CogView dataset using keywords such as “abstract” and “texture”, because these are mostly background images used for design; they consist of repeating patterns and contribute little to text-to-image generation. We then replenish the dataset with 5 million tag-image pairs. About half of the text is translated from English, and both the Chinese and English text are kept to train our bilingual CogLM. Only images with a resolution of at least 480×480 are used to train the super-resolution modules.
5.2 Machine Evaluation
To compare with previous and concurrent works, we follow the most popular benchmark, originating from DALL-E [26]: Fréchet Inception Distance (FID) and Inception Score (IS) evaluated on MS-COCO [17]. 30,000 captions from the validation set are sampled to evaluate the FID. Since each image in COCO has up to 5 different captions, we carefully select the sampled captions so that they describe different images. We generate 16 samples for each caption (translated into Chinese) and select the best one, i.e. the one with the lowest caption perplexity (the Caption Score in [3]). Note that FID is not the perfect metric to evaluate CogView2, because (1) the advantage of CogView2 is generating high-resolution images, yet we need to resize the images back to 256×256 for a meaningful comparison; (2) there are mistakes when translating the English captions into Chinese; and (3) our training data contain many single-object images, which are quite different from those in the distribution of COCO (common objects in context).
The results of the machine evaluation are demonstrated in Table 1. We find that fine-tuning CogLM on the MS-COCO dataset largely improves the FID: during our fine-tuning, the FID diminishes from 24.0 (0 iterations) → 19.2 (2,500 iterations) → 17.5 (7,500 iterations). However, we find that the quality of generation (as judged by human evaluation) deteriorates. Though the style is similar to COCO, the generation is not as accurate as for the non-fine-tuned version, which also corresponds to the scores in the human evaluation in Figure 7.
5.3 Human Evaluation
As the most persuasive metric, we conduct a large-scale human evaluation following the setting in CogView [3] (see Appendix for details). The experiments include a total of 4,600 groups of comparisons on COCO captions between publicly available text-to-image works, including DF-GAN [28], LAFITE [39], CogView [3], CogView2 (including its version fine-tuned on COCO), and the ground truth recovered after VQ-VAE reconstruction. Note that the VQ-VAE in CogView2 is much better than that in CogView, which makes the recovered ground truth a stronger upper bound. The results are demonstrated in Figure 7. An intriguing finding is that the fine-tuned CogView2, despite a much better FID, performs worse than the original model. We conjecture that the fine-tuned model fits the style of the complex scenes in COCO, while annotators may prefer generated samples with isolated subjects.
5.4 Analysis of the Speed and FLOPs of LoPAR
As discussed in § 1, our motivation is to increase the degree of parallelism for inference acceleration, even at the cost of more FLOPs. Autoregressive generation with cached hidden states has the same FLOPs as a teacher-forcing forward step, but is much slower (858 ms for a forward step vs. 225.9 s for autoregressive generation at the CogView2 scale). For LoPAR, the cost is exactly N times the FLOPs of a forward step (N = 6 in our setting). We compare the inference speed of the super-resolution stage under different strategies in Table 2.
6 Discussion
Autoregressive or Diffusion? Although GPTs have achieved great success in text generation, diffusion models are becoming increasingly popular in image generation. Here we compare diffusion models with autoregressive models from the aspect of speed, the largest disadvantage of autoregressive models, as discussed in Section 1. With the same architecture, diffusion models require more FLOPs but have a high degree of parallelism. They can also trade off quality against time consumption by manually scheduling the stride of sampling. For example, Glide [19] samples 250 diffusion steps for evaluation and 27 steps for interactive sampling, reducing the latency to 15 s. Autoregressive models must generate the image token by token, but our LoPAR can upsample the image with a high degree of parallelism, so that (potentially) we can reduce the time cost by introducing more hierarchies, designing models much faster than diffusion models.
Comparison between DALL-E-2 and CogView2. DALL-E-2 [27] is a recently released concurrent work for text-to-image generation at 1024×1024 resolution. Its probabilistic model and architecture are quite different from those of CogView2, but both models share the same spirit – hierarchical generation. The difference is that DALL-E-2 adopts an additional third-level super-resolution and a generation prior, which contribute to a potential quality gain but also lead to expensive resource consumption. CogView2 is able to synthesize scenes similar to the limited demos of DALL-E-2, e.g. “lion teacher” (Figure 1) vs. “panda scientist” (DALL-E-2), even though CogView2 is trained using only 5% of the data used by DALL-E-2 (650M text-image pairs). In the future, CogView2 could also adopt the third-level super-resolution and the prior, though this is mostly an engineering effort.
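As a back-of-the-envelope check on the parallelism figures discussed above, the following sketch only restates numbers already given in § 1 and § 3.3 (60×60-token resolution, 6 LoPAR iterations).

```python
# Sequential model runs needed for one 60x60-token super-resolved image.
ar_runs = 60 * 60      # pure autoregressive decoding: one run per token
lopar_runs = 6         # LoPAR: six parallel iterations (Sec. 3.3)
print(f"{ar_runs} -> {lopar_runs} runs ({ar_runs // lopar_runs}x fewer)")
# Output: 3600 -> 6 runs (600x fewer), matching the 1/600 figure in Sec. 1.
```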
7 Conclusion
The breakthrough in the text-to-image domain was made by autoregressive models. However, slow generation and high complexity hinder researchers' attempts to improve the quality in this direction. In this paper, we put forward an approach based on hierarchical transformers to help autoregressive models remedy these disadvantages, and to bridge the gap between text-to-image pretraining and recent visual representation learning methods.
Broader Impact. The advancement of text-to-image generation, especially text-guided image editing, will ease the creative work of artists and designers, while also posing a risk of misinformation that could permanently damage the reliability of web photos. However, it may be possible to train a classifier to distinguish real from CogView2-generated images according to texture features.
Acknowledgments and Disclosure of Funding
We would like to thank Zhao Xue and Sha Yuan for their help in collecting the dataset, Hanxiao Qu for maintaining the machines, Yue Cao and Chang Zhou for useful discussions, and Zhendong Zhang for releasing an initial version of the CUDA local attention. Funding in direct support of this work: GPU hours donated by BAAI; NSFC for Distinguished Young Scholar (61825602).
1. What are the main contributions and strengths of the paper regarding text-to-image generation?
2. What are the weaknesses and limitations of the paper, particularly regarding its claims and comparisons with other works?
3. How does the reviewer assess the quality and novelty of the proposed approach in the context of previous research?
4. What are some potential applications and future directions for improving the model's performance and capabilities?
5. Can you provide additional suggestions or nitpicking comments to enhance the clarity and readability of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a pretraining method, a Cross-Modal General Language Model (CogLM), that masks both image and text tokens in the input and learns to predict them in an autoregressive manner, while handling bidirectional context. By fine-tuning a pretrained transformer with this approach, the authors construct a hierarchical model, CogView2, which first maps a generated image into a larger image (direct super-resolution) and then refines local patches (local parallel autoregressive), thus improving resolution as well as inference speed. The experiments in the paper suggest that CogView2 performs comparably to other models despite its smaller model size and training data size. Meanwhile, the approach results in a considerable reduction in model run times (e.g. 10x faster than its predecessor, CogView). Training and evaluation in this paper consider text in both English and Chinese.
Strengths And Weaknesses
Strengths
- The paper is clearly written and aided with great visualizations. Overall, the problems and proposed solutions are well-motivated and supported with appropriate evidence or justification (e.g. findings from existing literature, their own empirical observations, or limitations due to compute resources).
- The topic is timely; the effort for making the task of text-to-image generation faster and better is of great interest to the community. Their approach of generating a low-resolution image and refining it to be a high-resolution image is simple and straightforward. The use of various techniques is adequately justified (e.g. masking strategy and attention mask) and ablated (e.g. clustering sampling and attention upweighting).
- "Faster" text-to-image generation: One of the main contributions of this paper is to make text-to-image generation faster. Although the experiment section doesn't directly compare different models' inference time in an end-to-end fashion, the last paragraph of the introduction section mentions that their model run time for local parallel autoregressive generation is 600x faster and overall 10x faster than their previous model.
- "Better" text-to-image generation: The experiments follow popular benchmarking practice and discuss the gap between automatic metrics and human evaluation. With automatic metrics, CogView2 performs comparably to other methods on MS-COCO based on Frechet Inception Distance. Based on human evaluation, CogView2 performs better on all metrics (image clarity, texture quality, and relevance to the caption) than CogView, Lafite, and DF-GAN.
Weaknesses
- "Better" text-to-image generation: As the authors acknowledged in the paper, the automatic metrics on MS-COCO may not be the best way to evaluate these models, hence the claim of CogView2 being "better" (in the title) or competitive (in the abstract) compared to other models such as DALLE2 could use some scoping/hedges. Outside of this paper, there have been some informal qualitative comparisons between several models, which seem to give the impression that CogView2 is not strictly better than other models:
  - https://huggingface.co/spaces/THUDM/CogView2
  - https://twitter.com/bhagatsurya2/status/1542824988092530689
  - https://www.reddit.com/r/MachineLearning/comments/vkvq0j/r_cogview2_faster_and_better_texttoimage/
- Bilinguality: Since this paper considers text in both English and Chinese and notes that "Chinese input produces better results than English input" (https://huggingface.co/spaces/THUDM/CogView2), it would be interesting to see more in-depth analysis on this bilingual aspect of the model. The authors state that they used [BOE] to denote the beginning of English text and [BOC] for Chinese text, but do not justify the decision or discuss any findings based on the two languages.
Questions
- What were the main challenges/blockers for directly comparing different models' inference time in an end-to-end fashion?
Suggestions
- Discussion on failure modes: Even from the cherry-picked examples in Figure 1, there are multiple failure modes observed (e.g. different numbers or lengths of fingers). What are the main types of failure modes the authors or human annotators observed? Any difference when it's in English vs. Chinese?
Nitpicking
- In Figure 2: "Supports tokenization of both Image Chinese and English" → "Supports tokenization of both images and texts in Chinese and English"
- In Section 2: "DF-GAN, et al." → "and DF-GAN."
- In Section 2 and throughout the paper: use either "VQ-VAE" or "VQVAE" to be consistent
- In Section 3.1: consider moving the paragraph "In NLP, the General Language Model [...]" to Related Work
- In Section 3.1: l and r are not defined
- In Section 3.1: "where [BOE], [BOC] are separators meaning beginning-of-English and beginning-of-Chinese" → "where [BOE] and [BOC] are separators to indicate the beginning of English text and that of Chinese text"
- In Section 3.1: "Ideally, the two tasks should be separated" is not justified
- In Section 3.2: "Image, Chinese and English" doesn't really type check; maybe it should be "Image and Text in Chinese and English"?
- In Section 5.2: "Frechet Inception Distances and Inception Scores" → "Frechet Inception Distances (FID) and Inception Scores (IS)"
- In Section 6: clarify "third-level super-resolution"
- In Section 7: "it is possible to train a classifier to distinguish the real and CogView2-generated images according to the texture features" is not supported with any evidence
Limitations
I think some scoping about text is necessary. The paper generally assumes that input text can be any text in English and Chinese; their github repository explicitly says "any text" (https://github.com/THUDM/CogView2). For preciseness, it would be helpful to note any potential/practical limitations more clearly. For instance, CogLM can accept up to 111 text tokens (Section 3.2). And presumably, there aren't that many short text inputs (e.g. one or two words) in the training data – then what would be a reasonable minimum length for the text input for the model to perform well? More generally, based on the types of images and texts in the training data, CogLM may perform better for certain kinds of images and texts. This insight can greatly help the future use of this pretrained model. This applies to images as well. The fact that this paper only considers square images (based on the N^2 notation) is not explicitly addressed.
NIPS
Title
CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers
Abstract
The development of transformer-based text-to-image models is impeded by their slow generation and high complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and fine-tune it for fast super-resolution. The new text-to-image system, CogView2, shows generation performance competitive with the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing of images.
Figure 1: Text-to-image samples from CogView2, which supports both Chinese and English. The actual input text is in Chinese, translated into English here for better understanding. Sample prompts: A lion man is typing in the office. A beautiful girl is hugging a husky. A lion teacher wearing a suit is in front of a blackboard. A robot is riding under the blue and cloudy sky. Several youths are talking in a bar. A young woman is taking photos. A tiger with angel's wings. A girl holding an oil-paper umbrella in a rainy lane. Earth in the Eye. A magnificent church. Sketch. Mount Fuji, cherry blossom and Akita dog. Oil painting. A pirate captain with a skull. Codes and a demo website will be updated at https://github.com/THUDM/CogView2.
1 Introduction
Recently, text-to-image generation has been greatly advanced by large-scale pretrained transformers, e.g. DALL-E [26] and CogView [3]. These models learn to generate image tokens in an autoregressive way. However, they also suffer from the following disadvantages:
Slow generation. Generation with autoregressive models is usually much slower than with non-autoregressive models, e.g. GANs [10], of the same FLOPs. Rather than the large number of parameters, this shortcoming is mainly attributed to the token-by-token nature of autoregressive generation, which cannot exploit the parallel computing ability of GPUs, even after caching hidden states [25]. This is a significant limitation.
Expensive high-resolution training. The current large-scale pretrained models are generally based on Transformers [30], where the attention operation has both time and space complexity of O(n²) for training sequences of length n. Within a limited budget, we face a trade-off between the number of parameters, representing the modeling power, and the resolution of the generated images. For this reason, most current text-to-image models choose a resolution of 32×32 tokens (usually 256×256 pixels) [3, 26, 11], which is far less dense than the resolution of real photos.
Unidirectionality. For images, autoregressive models, e.g. GPTs, usually generate tokens in raster-scan order. This order shows the best perplexity during evaluation [7]. However, it makes the models unaware of the tokens below or to the right of the current position during generation; as a result, text-guided infilling is not supported. Moreover, the unidirectionality leads to a gap between pretrained text-to-image models and vision transformers (ViTs) [5] based on bidirectional masked prediction, e.g. MAE [12] and SimMIM [34], limiting their application to traditional visual tasks such as image classification and object detection.
1. What is the focus and contribution of the paper on text-to-image generation?
2. What are the strengths of the proposed approach, particularly in hierarchical generation and customized CUDA kernel?
3. What are the weaknesses of the paper, especially regarding model improvement and experiment support?
4. Do you have any questions regarding the visualization and quality of generated images?
5. What are your concerns regarding the masking strategy and its ablation study?
6. Why did the authors not include DALL-E as part of the human quality evaluation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a faster and better text-to-image generation model called CogView2. Compared to the CogView baseline, the main contributions are: 1) hierarchical generation that upsamples the original low-resolution image tokens and then refines them, 2) a customized CUDA kernel that speeds up training, 3) a special attention masking strategy used in CogLM during training. Experiments clearly demonstrate that CogView2 generates better images than CogView both in terms of FID scores and in terms of human evaluation.
Strengths And Weaknesses
Strengths:
- The paper is mostly easy to follow. The authors' proposed idea of using super-resolution and refinement modules to hierarchically generate higher-resolution images is intuitive and works well in practice. Designing these two modules is clearly non-trivial work.
- The authors further demonstrate that clustering sampling is better than simple top-k. I also appreciate that they are willing to write a customized cuda kernel to speed up training. The speed-up seems to be significant in the autoregressive case.
Weaknesses:
- Model improvement is limited compared to CogView. Both use a Transformer to jointly learn the likelihood of text tokens and image tokens. The main difference is the new masking strategy in CogLM, where tokens inside the mask region are trained to predict the next token based on past masked tokens and non-masked context tokens. The authors stated that this approach unifies autoregressive generation and bidirectional prediction, but I find this design lacking justification. The experiment section also fails to provide any ablation study to support the choice of this particular masking strategy.
- Even with the new hierarchical design, CogView2 still generates images at a resolution lower than other works such as DALL-E-2 [1] and Imagen [2]. Stacking another direct/iterative super-resolution module is straightforward and should solve the issue. Resource limitation is indeed a problem, but I still need to point out the lower resolution as part of the weakness.
- Only a few hand-picked visualizations are provided in the main paper and in the supplement. This makes it very hard to qualitatively judge the performance.
[1] Ramesh, Aditya, et al. "Hierarchical text-conditional image generation with CLIP latents." arXiv preprint (2022).
[2] Saharia, Chitwan, et al. "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding." arXiv preprint (2022).
Questions
(1) It would be great if the authors could provide more visualizations, especially those with higher resolution. I cannot judge the quality of generated images from just a few hand-picked ones.
(2) What is FID-k in Table 1? I assume it means the radius of the Gaussian filter but couldn't find any explanation in the text. If my guess is correct, then shouldn't FID-0 be the most important number? Does this mean that CogView2 is worse than multiple baseline methods in terms of image quality?
(3) As stated before, I have concerns about the masking strategy. I do not see a motivation for joint learning of autoregressive generation and bidirectional mask prediction. The authors are encouraged to provide more ablation studies to support the design of CogLM and explain why it is better than the training in CogView.
(4) For the human quality evaluation, I wonder why the authors do not include DALL-E as part of the test? That should be a key baseline.
Limitations
Limitations and potential negative societal impact are adequately addressed by the authors.
NIPS
Title IMED-RL: Regret optimal learning of ergodic Markov decision processes Abstract We consider reinforcement learning in a discrete, undiscounted, infinite-horizon Markov Decision Problem (MDP) under the average reward criterion, and focus on the minimization of the regret with respect to an optimal policy, when the learner does not know the rewards nor the transitions of the MDP. In light of their success at regret minimization in multi-armed bandits, popular bandit strategies, such as the optimistic UCB, KL-UCB or the Bayesian Thompson sampling strategy, have been extended to the MDP setup. Despite some key successes, existing strategies for solving this problem either fail to be provably asymptotically optimal, or suffer from a prohibitive burn-in phase and computational complexity when implemented in practice. In this work, we shed a novel light on regret minimization strategies, by extending to reinforcement learning the computationally appealing Indexed Minimum Empirical Divergence (IMED) bandit algorithm. Traditional asymptotic problem-dependent lower bounds on the regret are known under the assumption that the MDP is ergodic. Under this assumption, we introduce IMED-RL and prove that its regret upper bound asymptotically matches the regret lower bound. We discuss both the case when the supports of transitions are unknown, and the more informative but a priori harder-to-exploit-optimally case when they are known. Rewards are assumed light-tailed, semi-bounded from above. Last, we provide numerical illustrations on classical tabular MDPs, ergodic and communicating only, showing the competitiveness of IMED-RL in finite time against state-of-the-art algorithms. IMED-RL also benefits from a light complexity. 1 Introduction We study Reinforcement Learning (RL) with an unknown finite Markov Decision Problem (MDP) under the average-reward criterion, in which a learning algorithm interacts sequentially with the dynamical system, without any reset, in a single and infinite sequence of observations, actions, and rewards, while trying to maximize its total accumulated rewards over time. Formally, we consider a finite MDP M = (S, A, p, r) where S is the finite set of states, A = (As)s∈S specifies the set of actions available in each state, and we introduce the set of pairs XM = {(s, a) : s ∈ S, a ∈ As} for convenience. Further, p : XM → P(S) is the transition distribution function and r : XM → P(R) the reward distribution function, with corresponding mean reward function denoted by m : XM → R (given a set E, P(E) denotes the set of probability distributions on E). An agent interacts with the MDP at discrete time steps t ∈ N∗ and yields a random sequence (st, at, rt)t of states, actions, and rewards in the following way. At each time step t, the agent observes the current state st and decides the action at to take based on st and possibly past information, i.e. previous elements of the sequence. After playing at, it observes a reward rt ∼ r(st, at), the current state of the MDP changes to st+1 ∼ p(·|st, at), and the agent proceeds sequentially. In the average-reward setting, one is interested in maximizing the limit of 1/T ∑_{t=1}^T rt as T → ∞, provided it exists.
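To make the interaction protocol concrete, here is a minimal simulation sketch of the average-reward loop described above, written against hypothetical NumPy arrays p (transitions, shape S×A×S) and m (mean rewards, shape S×A); the Bernoulli reward choice and all names are ours, not the paper's code, which only assumes light-tailed rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(p, m, policy, s0, T):
    """Simulate T steps of the average-reward interaction loop:
    observe s_t, play a_t = policy(s_t), receive r_t, move to s_{t+1}.
    Rewards are drawn Bernoulli(m[s, a]) purely for illustration."""
    s, total = s0, 0.0
    for _ in range(T):
        a = policy(s)
        total += rng.binomial(1, m[s, a])        # r_t ~ r(s_t, a_t)
        s = rng.choice(p.shape[2], p=p[s, a])    # s_{t+1} ~ p(.|s_t, a_t)
    return total / T                             # empirical average reward
```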
This setting is a popular framework for studying sequential decision-making problems; it can be traced back to seminal papers such as those of Graves and Lai [1997] and Burnetas and Katehakis [1997]. This theoretical framework allows one to study the exploration-exploitation trade-off that arises from the sequential optimization problem a learner is trying to solve while being uncertain about the very problem it is optimizing. In this paper, one is interested in developing a sampling strategy that is optimal amongst strategies that aim at maximizing the average reward, i.e. balancing exploration and exploitation in an optimal way. To assert optimality, we define the notion of regret and state a regret lower bound, with the purpose of defining a theoretically sound notion of optimality that is problem-dependent. While the regret measures the discrepancy to optimality of a learning strategy, a problem-dependent regret lower bound formally assesses the minimal regret that any learning algorithm must incur on a given MDP problem by computing a minimal rate of exploration. Because this minimal rate of exploration depends on the problem, it is said to be problem-dependent, as opposed to the worst-case regret studies that exist in the MDP literature (e.g. Jaksch et al. [2010]). Regret lower bounds currently exist in the literature when the MDP M is assumed to be ergodic (we prefer the term ergodic over the more accurate irreducible, as it is a standard abuse of terminology in the MDP community; mathematically, an MDP is ergodic if it is irreducible, aperiodic and positive recurrent). Hence we hereafter make this assumption, in order to be able to compare the regret of our algorithm to an optimal bound. Similarly, to ensure fast enough convergence of the empirical estimate of the reward to the true mean, an assumption controlling the rate of convergence to the mean is necessary. Assumption 1 (Light-tail rewards). For all x ∈ XM, the moment generating function of the reward exists in a neighborhood of 0: ∃λx > 0, ∀λ ∈ R such that |λ| < λx, E_{R∼r(x)}[exp(λR)] < ∞. Policy Regret and ergodicity are defined using properties of the set of stationary deterministic policies Π(M) on M. On M, each stationary deterministic policy π : S → As defines a Markov reward process, i.e. a Markov chain on S with kernel pπ : s ∈ S ↦ p(·|s, π(s)) ∈ P(S), together with rewards rπ : s ∈ S ↦ r(s, π(s)) ∈ P(R) and associated mean rewards mπ : s ∈ S ↦ m(s, π(s)) ∈ R. The t-step transition kernel of π on M is denoted p^t_π. We denote p̄π = lim_{T→∞} 1/T ∑_{t=1}^T p^{t−1}_π : S → P(S) the Cesàro average of pπ. A learning agent executes a sequence of policies πt ∈ Π(M), t ⩾ 1, where πt depends on past information (st′, at′, rt′)t′<t. With a slight abuse of notation, a sequence of identical decision rules, πt = π for all t, is also denoted π. Gain The cumulative reward (value) at time T, starting from an initial state s1, of policy π = (πt)t is formally given by V_{s1}(M, π, T) = E_{π,M,s1}[∑_{t=1}^T rt] = E_{π,M,s1}[∑_{t=1}^T m(st, at)] = ∑_{t=1}^T ((∏_{t′=1}^{t−1} p_{πt′}) m_{πt})(s1). (1) For π ∈ Π(M), the average reward 1/T V_{s1}(M, π, T) tends to (p̄π m)(s1) as T → ∞. The gain of policy π ∈ Π(M), when starting from state s1, is defined by gπ(s1) = (p̄π m)(s1), and the optimal gain is defined as g⋆(s1) = max_{π∈Π(M)} gπ(s1). Os(M) = {π ∈ Π : gπ(s) = g⋆(s)} is the set of policies achieving maximal gain on M starting from state s. Definition 1 (Regret). The regret at time T of a learning policy π = (πt)t starting at state s on an MDP M is defined with respect to any π⋆ ∈ Os(M) as Rπ,s(M, T; π⋆) = Vs(M, π⋆, T) − Vs(M, π, T). (2) In this paper, we aim to find a learning algorithm with asymptotically minimal regret.
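As a concrete companion to the definition of the gain, the following sketch computes g_π for a stationary deterministic policy in an ergodic MDP via its stationary distribution (for an ergodic chain, the Cesàro average p̄_π has identical rows equal to that distribution). This is our own illustrative code, not the authors'; NumPy and the array shapes from the sketch above are assumed.

```python
import numpy as np

def gain(p, m, pi):
    """Gain g_pi of a stationary deterministic policy pi in an ergodic MDP.
    p: (S, A, S) transitions, m: (S, A) mean rewards, pi: length-S actions."""
    S = p.shape[0]
    P = p[np.arange(S), pi]                # (S, S) Markov kernel of pi
    m_pi = m[np.arange(S), pi]             # mean rewards along pi
    evals, evecs = np.linalg.eig(P.T)      # stationary dist: left Perron vector
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    mu /= mu.sum()                         # normalize to a probability vector
    return float(mu @ m_pi)                # g_pi = sum_s mu(s) m_pi(s)
```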
Lemma 1 will show that for all optimal policies π⋆, the regrets are the same up to a bounded term that therefore does not count in the asymptotic analysis. Some authors, such as Bourel et al. [2020], define the regret as T gM(s) − Vs(M, π, T), which is equal to the one we defined up to a bounded term (again by Lemma 1). No stationary policy can be optimal at all times, and the important fact is that all those notions of regret induce the same asymptotic lower bound. In the considered setting, the learning agent interacts with the MDP without any reset. The minimal assumption would be to allow the agent to come back, with positive probability and in finite time, from any initial mistake, so that the agent is not stuck in a sub-optimal area of the system. This is assuming that the MDP is communicating, that is, ∀s, s′, ∃π, t ∈ N : p^t_π(s′|s) > 0. However, in the literature, lower bounds on the regret are stated for MDPs satisfying a stronger assumption, ergodicity. Since one is interested in crafting an algorithm matching a lower bound, we consider this stronger assumption. Assumption 2 (Ergodic MDP). The MDP M is ergodic, that is, ∀s, s′, ∀π, ∃t ∈ N : p^t_π(s′|s) > 0. Intuitively, this means that for all policies and all pairs of states, there exists a finite trajectory of positive probability between the states. Interestingly, the ergodic property can be assumed on the MDP or on the set of policies in which we seek an optimal one. For instance, in any communicating MDP, all ε-soft policies (a policy π : S → P(As) is ε-soft if π(a|s) ⩾ ε/|As| for all s and a) are ergodic; more on this in the Experiments section 5 and Appendix E. Related work Had the MDP only one state, it would be a bandit problem. Lower bounds on the bandit regret and algorithms matching these lower bounds, sometimes up to a constant factor, are well studied in the bandit literature. Therefore, bandit sampling strategies with known theoretical guarantees have inspired RL algorithms. The KL-UCB algorithm (Burnetas and Katehakis [1996], Maillard et al. [2011]) has inspired the strategy of the seminal paper of Burnetas and Katehakis [1997], as well as the more recent KL-UCRL strategy (Filippi et al. [2010], Talebi and Maillard [2018]). Inspired by the UCB algorithm (Agrawal [1995], Auer et al. [2002]), a number of strategies implementing the optimism principle have emerged, such as UCRL (Auer and Ortner [2006]), UCRL2 (Jaksch et al. [2010]) and UCRL3 (Bourel et al. [2020]); beyond these, see Azar et al. [2017] and Dann et al. [2017] for the related episodic setup. The strategy PSRL (Osband et al. [2013]) is inspired by Thompson sampling (Thompson [1933]). Outline and contribution In this work, we build on the IMED strategy (Honda and Takemura [2015]), a bandit algorithm that enjoys optimal guarantees and practical efficiency but has never been used by the RL community. We fill this gap by proposing the IMED-RL algorithm, which we prove to be asymptotically optimal for the average-reward criterion. We revisit the notion of skeleton (Equation 12) introduced in the seminal work of Burnetas and Katehakis [1997], with a subtle but key modification that prevents a prohibitive burn-in phase: their skeleton is sometimes empty at some states when t is too small, which causes their strategy to work well only after t is large enough to ensure that the skeleton contains at least one action in each state (see Appendix G for further details). Further, this novel notion of skeleton enables IMED-RL to remove any tracking or hyperparameter and mimic a stochastic-policy-iteration-like algorithm.
Further, this skeleton scales naturally with the studied MDP, as it does not explicitly refer to absolute quantities such as the time. We prove that our proposed IMED-RL is asymptotically optimal and show its numerical competitiveness. Building on IMED, we make an additional assumption on the reward that is less restrictive than the common bounded-reward hypothesis made in the RL community. Assumption 3 (Semi-bounded rewards). For all x ∈ X, r(x) belongs to a subset Fx ⊂ P(R) known to the learner (e.g. Bernoulli, multinomial with unknown support, beta, truncated Gaussians, a mixture of those, etc.). There exists a known quantity mmax(x) ∈ R such that for all x ∈ X, the support Supp(r(x)) of the reward distribution is semi-bounded from above, Supp(r(x)) ⊂ (−∞, mmax(x)], and its mean satisfies m(x) < mmax(x). Ergodic assumption While many recent works focused on worst-case regret bounds only (e.g. Domingues et al. [2021], Zanette and Brunskill [2019], Jin et al. [2018] and citations therein), studying problem-dependent optimal regret bounds has been somewhat overlooked. Being more general is always more appealing, but the restriction from communicating MDPs to ergodic MDPs allows us to target exact asymptotic optimality: not just a bound, and not just a worst-case bound. Ergodic MDPs are the only case in which explicit problem-dependent lower bounds are known and hence can be directly used to build a strategy. Indeed, the main challenge towards problem-dependent optimality is that existing lower bounds for exploration problems in MDPs are usually written in terms of non-convex optimization problems. This implicit form makes it hard to understand the actual complexity of the setting and, thus, to design optimal algorithms. Existing proof strategies for state-of-the-art algorithms (UCRL, PSRL, etc.) ensure a regret bound for communicating MDPs but fail to provide optimality guarantees even in the ergodic case. We believe that deriving a sharp result in the ergodic case might prove insightful to pave the way towards the communicating case. From a theoretical standpoint, related to UCRL-type strategies, the modern analysis of KL-UCRL by Talebi and Maillard [2018] also makes the ergodic assumption. This hypothesis has also been used in the theoretical work of Tewari and Bartlett [2007] and the work of Ok et al. [2018] that concerns structured MDPs. Related to this assumption are works that are interested in identification and sample complexity. Wang [2017] introduced a primal-dual method to compute an ε-optimal policy and bound the number of sample transitions needed to reach this goal. Jin and Sidford [2020] relaxed the ergodic hypothesis by using a mixing hypothesis that implies the uniqueness of the recurrent class for each policy. In this setting, the authors also derive a bound on the number of samples needed to compute an ε-optimal policy. 2 Regret lower bound In this section, we recall the regret lower bound for ergodic MDPs and provide a few insights about it. Characterizing optimal policies Relying on classical results that can be found in the books of Puterman [1994] and Hernández-Lerma and Lasserre [1996], we give a useful characterization of optimal policies that is used to derive a regret lower bound.
Under the ergodic Assumption 2 on the MDP M, for every policy π ∈ Π(M), the gain is independent of the initial state, i.e. gπ(s) = gπ(s′) for all states s and s′, and we denote it gπ. Similarly, the set of optimal policies O(M) is state-independent since Os(M) = Os′(M). Any policy π satisfies the following fixed-point property (Poisson equation): gπ + bπ(s) = mπ(s) + (pπ bπ)(s), (3) where bπ : S → R is called the bias function and is defined up to an additive constant by bπ(s) = (∑_{t=1}^∞ (p^{t−1}_π − p̄π) mπ)(s). We highlight that the bias plays a role similar to the value function in the discounted-reward setting, in which the gain is always zero and Equation 3 reduces to the Bellman equation, giving a direction in which to extend our results to this other RL setting. Interestingly, for any communicating, and a fortiori ergodic, MDP, the span S(bπ) = max_{s∈S} bπ(s) − min_{s∈S} bπ(s) of the bias function of any policy is bounded, which allows us to decompose the regret in the following useful way. Lemma 1 (Regret decomposition). Under the ergodic assumption 2, for every optimal policy ⋆ ∈ O(M), the regret of any policy π = (πt)t can be decomposed as Rπ,s1(M, T; ⋆) = ∑_{x∈XM} E_{π,s1}[Nx(T)] ∆x(M) + ([∏_{t=1}^T p_{πt} − p^T_⋆] b⋆)(s1), (4) where the second term is bounded by the span S(b⋆), Ns,a(T) = ∑_{t=1}^T 1{st = s, at = a} counts the number of times the state-action pair (s, a) has been sampled, and ∆s,a(M) is the sub-optimality gap of the state-action pair (s, a) in M, ∆s,a(M) = m(s, a) + pa b⋆(s) − m⋆(s) − p⋆ b⋆(s) = m(s, a) + pa b⋆(s) − g⋆ − b⋆(s), (5) with pa = p(·|s, a) by a slight abuse of notation. Action a ∈ As is optimal if and only if ∆s,a(M) = 0; otherwise, it is said sub-optimal. This result can be found in Puterman [1994] and is rederived in Appendix C. Under the ergodic Assumption 2 on the MDP M, all optimal policies satisfy a Poisson equation, while some are also characterized by the optimal Poisson equation (see Hernández-Lerma and Lasserre [1996]), used to compute the optimal gain and a bias function associated with an optimal policy: gM + bM(s) = max_{a∈As} { m(s, a) + ∑_{s′∈S} p(s′|s, a) bM(s′) }. (6) Lower bound To assess the minimal sampling complexity of a sub-optimal state-action pair, one must compute how far a sub-optimal state-action pair is from being optimal from an information point of view. A sub-optimal state-action pair (s, a) ∈ XM is said to be critical if it can be made optimal by changing the reward r(s, a) and the transition p(·|s, a) while respecting the assumptions on the rewards and transitions. Formally, let φM : P(R × S) → R, φM(ν ⊗ q) = E_{R∼ν}[R] + q bM (7) denote the potential function of ν ⊗ q in M, where ν ⊗ q is the product measure of ν and q. A pair (s, a) ∈ XM is critical if it is sub-optimal and there exist ν ∈ Fs,a and q ∈ P(S) such that φM(ν ⊗ q) > γs(M), where γs(M) def= gM + bM(s). (8) Note that γs(M) = max_{a∈As} φM(r(s, a) ⊗ p(s, a)) by the optimal Poisson equation (6). Definition 2 (Sub-optimality cost). The sub-optimality cost of a sub-optimal state-action pair (s, a) ∈ XM is defined as Ks,a(M) def= Ks,a(M, γs(M)), where Ks,a(M, γ) = inf_{ν∈Fs,a, q∈P(S)} { KL(r(s, a) ⊗ p(·|s, a), ν ⊗ q) : φM(ν ⊗ q) > γ }, (9) and KL denotes the Kullback-Leibler divergence between distributions. A lower bound on the regret may now be stated for a certain class of learners, the set of uniformly consistent learning algorithms, i.e. those policies π = (πt)t such that E_{π,M}(Ns,a(T)) = o(T^α) for all sub-optimal state-action pairs (s, a) and all 0 < α < 1 (see Agrawal et al. [1989]).
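For intuition, the Poisson equation (3) for a fixed policy can be solved as a small linear system once the kernel and mean rewards are known. Below is a minimal sketch (our own, with NumPy assumed) that pins b(s_0) = 0 to resolve the additive-constant ambiguity of the bias.

```python
import numpy as np

def gain_and_bias(P, m_pi):
    """Solve g + b(s) = m_pi(s) + (P b)(s) for an ergodic kernel P (S, S).
    Unknowns are (g, b); we add the normalization b[0] = 0."""
    S = P.shape[0]
    A = np.zeros((S + 1, S + 1))
    A[:S, 0] = 1.0               # coefficient of g in each Poisson equation
    A[:S, 1:] = np.eye(S) - P    # coefficients of b
    A[S, 1] = 1.0                # extra row enforcing b[0] = 0
    rhs = np.concatenate([m_pi, [0.0]])
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[0], x[1:]           # gain g, bias vector b
```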
Theorem 1 (Regret lower bound, Burnetas and Katehakis [1997]). Let M = (S, A, p, r) be an MDP satisfying Assumptions 1, 2, 3. For every uniformly consistent learning algorithm π, lim inf_{T→∞} E_{π,M}[Ns,a(T)] / log T ⩾ 1 / Ks,a(M) (10) with the convention that 1/∞ = 0. The regret lower bound is lim inf_{T→∞} Rπ(M, T) / log T ⩾ ∑_{(s,a)∈C(M)} ∆s,a(M) / Ks,a(M) (11) where C(M) = {(s, a) : 0 < Ks,a(M) < ∞} is called the set of critical state-action pairs. Those are the state-action pairs (s, a) that could be confused for an optimal one if we were to change their associated reward and transition distributions at the displacement cost of Ks,a(M). 3 The IMED-RL Algorithm In this section we introduce and detail the IMED-RL algorithm, whose regret matches this fundamental lower bound and which extends the IMED strategy of Honda and Takemura [2015] to ergodic MDPs. Indeed, for a single-state MDP, that is, a multi-armed bandit, IMED-RL simply reduces to IMED. Empirical quantities IMED-RL is a model-based algorithm that keeps empirical estimates of the transitions p and rewards r, as opposed to model-free algorithms such as Q-learning. We denote by r̂t(s, a) = r̂(s, a; Ns,a(t)) and p̂t(s, a) = p̂(s, a; Ns,a(t)) the empirical reward distributions and transition vectors after t time steps, i.e. using Ns,a(t) samples from the distribution r(s, a). Initially, p̂(s, a; 0) is the uniform probability over the state space, and p̂(s, a; k) = (1 − 1/k) p̂(s, a; k − 1) + (1/k) s̄k, where s̄k is a vector of zeros except for a one at the index of sk, the k-th sample drawn from p(·|s, a). This defines at each time step t an empirical MDP M̂t = (S, A, p̂t, r̂t). On this empirical MDP, for each state, some actions have been sampled more than others and their empirical quantities are therefore better estimated. We call skeleton at time t the subset of state-action pairs that can be considered sampled enough at time t; it is defined by restricting As to As(t) for every state s ∈ S, with As(t) = { a ∈ As : Ns,a(t) ⩾ log²( max_{a′∈As} Ns,a′(t) ) }. (12) Since x > log² x, As(t) ≠ ∅, hence A(t) = (As(t))s contains at least one deterministic policy. We note that the MDP M(A(t)) def= (S, A(t), p, r), defined by restricting the set of actions to A(t) ⊆ A, is an ergodic MDP. The restricted empirical MDP M̂t(A(t)) def= (S, A(t), p̂t, r̂t) is also ergodic thanks to the ergodic initialization of the estimate p̂. Inspired by IMED, we define the IMED-RL index. Definition 3 (IMED-RL index). For all state-action pairs (s, a) ∈ XM, let us define Ks,a(t) def= Ks,a( M̂t(A(t)), γ̂s(t) ) with empirical threshold γ̂s(t) def= max_{a∈As} φ_{M̂t(A(t))}( r̂(s, a) ⊗ p̂(s, a) ). Then, the IMED-RL index of (s, a) at time t, Hs,a(t), is defined as Hs,a(t) = Ns,a(t) Ks,a(t) + log Ns,a(t). (13) Note that γ̂s(t) ≠ γs(M̂t(A(t))), as the maximum is taken over all a ∈ As and not just a ∈ As(t). Known support of transitions Were the supports of transitions known, the infimum in the sub-optimality cost Ks,a defined by Equation 9 would be redefined as one over the set {q ∈ P(S) : Supp(q) = Supp(p(·|s, a))}, modifying both the lower bound and the IMED-RL index. IMED-RL algorithm The IMED-RL algorithm consists in playing, at each time step t, an action at of minimal IMED-RL index at the current state st. The intuition behind the IMED-RL index is similar to that of the IMED index for bandits and stems from an information-theoretic view of the lower bound.
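The skeleton of Equation (12) is simple to compute from the visit counts. The sketch below is ours (NumPy assumed) and reads log² as the square of the natural logarithm, which matches the claim x > log² x for all x > 0.

```python
import numpy as np

def skeleton(N):
    """Per-state action sets A_s(t) of Equation (12).
    N: (S, A) array of visit counts N_{s,a}(t).
    Each A_s(t) is non-empty since x > log^2 x for all x > 0."""
    A_t = []
    for s in range(N.shape[0]):
        thresh = np.log(max(N[s].max(), 1.0)) ** 2
        A_t.append(np.flatnonzero(N[s] >= thresh))
    return A_t
```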
At a given time t, the frequency of play Ns,a(t)/Ns(t) of action a ∈ As in state s ∈ S should be larger than or equal to its posterior probability of being the optimal action in that state, exp(−Ns,a(t) Ks,a(t)), that is to say, Ns,a(t)/Ns(t) ⩾ exp(−Ns,a(t) Ks,a(t)). Taking the logarithm and rearranging the terms, this condition rewrites as Hs,a(t) ⩾ log Ns(t) at each time step t. The action that is the closest to violating this condition, or that violates it the most, is the one of minimal IMED-RL index, argmin_a Hs,a(t), the one IMED-RL decides to play. Algorithm 1 (IMED-RL: Indexed Minimum Empirical Divergence for Reinforcement Learning). Require: state-action space XM of MDP M, Assumptions 1, 2, 3; initial state s1. For t ⩾ 1: sample at ∈ argmin_{a∈A_{st}} H_{st,a}(t). The intuitions behind the IMED-RL algorithm are rooted in the control theory of MDPs and optimal bandit theory; IMED-RL intertwines the two, and the regret proof follows exactly from the following intuitions. Control In control theory, we assume that both the expected rewards and the transition probabilities of an MDP M are known. Policy iteration (see Puterman [1994], Bertsekas and Shreve [1978]) is an algorithm that computes a sequence (πn)n of deterministic policies that are increasingly strictly better until an optimal policy is reached. In the average-reward setting and under the ergodic assumption, a policy π is strictly better than another policy π′ if gπ(M) > gπ′(M). The policy iteration algorithm computes the sequence of policies recursively in the following way. Initially, an arbitrary deterministic policy π0 is chosen. At step n + 1 ∈ N∗, it computes mπn and bπn, then sweeps through the states s ∈ S in an arbitrary order until it reaches one state s such that there exists a ∈ A(s) with m(s, a) + p(·|s, a) bπn > mπn(s) + (pπn bπn)(s). If such an s does not exist, then it returns πn as an optimal policy. Otherwise, πn+1 is defined as πn+1(s′) = πn(s′) for all s′ ≠ s and πn+1(s) ∈ argmax_a { m(s, a) + p(·|s, a) bπn }. Such a step is called a policy improvement step. Policy iteration is guaranteed to finish in a finite number of steps, as the cardinality of Π(M) is finite. At each step n ∈ N∗, φ_{M(πn)} is a local function that takes into account the whole dynamics of the MDP and allows one to compute, via an argmax, an optimal choice of improvement (or optimal action) based on local information; φ_{M(πn)}(r(s, a) ⊗ p(·|s, a)) = m(s, a) + p(s, a) bπn. IMED-RL uses φ_{M̂(A(t))} and improves the skeleton similarly to policy iteration, as can be seen in the analysis of Section 4. Bandit control A degenerate case of MDP would be one with only one state s, in which φM(r(s, a)) = m(s, a) by choosing the bias function to be zero (recall that the bias function is defined up to an additive constant). Playing optimally consists in playing an action with the largest expected reward at each time step t, at ∈ argmax_{a∈As} m(s, a). Bandit Learning occurs when the rewards are unknown; this is the bandit problem. In that case, a lower bound on the regret similar to Theorem 1 exists. Under some assumptions on the reward distributions, optimal algorithms whose regret upper bounds asymptotically match the lower bound can be derived. IMED (Honda and Takemura [2015]) and KL-UCB (Maillard et al. [2011], Cappé et al. [2013]) are two such examples that use indexes, i.e. they compute a number Is,a(t) at each time step and play at ∈ argmin Is,a(t). Such indexes are crafted to correctly handle the exploration-exploitation trade-off. RL in Ergodic MDPs The delayed rewards caused by the dynamics of the system are the main source of difficulty arising from having more than one state.
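Algorithm 1 is a one-line decision rule once the indexes are available. Here is a hedged sketch of one IMED-RL step; the names are ours, and K_hat stands for the empirical sub-optimality costs K_{s,a}(t) of Definition 3, whose computation is discussed in Section 5.

```python
import numpy as np

def imed_rl_step(s_t, actions, N, K_hat):
    """Play an action of minimal index H_{s,a}(t) = N_{s,a} K_{s,a} + log N_{s,a}
    in the current state s_t. N: (S, A) counts; K_hat: (S, A) empirical costs;
    actions: per-state lists of available actions. Unsampled actions get
    index 0 here, so they are tried first."""
    H = np.array([N[s_t, a] * K_hat[s_t, a] + np.log(max(N[s_t, a], 1))
                  for a in actions[s_t]])
    return actions[s_t][int(np.argmin(H))]
```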
IMED-RL combines control and bandit theory in the following way. At each time step t, a restricted MDP M̂t(A(t)) is built from the empirical one M̂t. If the condition to belong to the skeleton is selective enough, then the potentials on the restricted empirical MDP M̂t(A(t)) may become close to those of the restricted true MDP M(A(t)), that is, ∥φ_{M̂t(A(t))} − φ_{M(A(t))}∥∞ is small. We want to make policy improvements by finding, at each state s, an action a′ ∈ argmax φ_{M(A(t))}(r(s, a) ⊗ p(·|s, a)), play it enough that it belongs to the skeleton (which will modify φ), and repeat until φ_{M(A(t))} = φM. Using φ, the global dynamics are reduced to a local function, so that at each state, the agent is presented with a bandit problem. This bandit problem is well estimated if ∥φ_{M̂t(A(t))} − φ_{M(A(t))}∥∞ is small. As opposed to the control setting, the learning agent cannot choose the state in which to make the policy improvement step, and it may be possible that no policy improvement step is possible at state st. However, thanks to the ergodic assumption 2, the agent is guaranteed to visit such a state in finite time, if it exists. There is a trade-off between the adaptivity of the skeleton, i.e. how quickly one can add an improving action to define a new φ, and the concentration of the statistical quantities defined on the restricted MDP. Related work Our notion of skeleton is built on the work of Burnetas and Katehakis [1997]. We improve on their original notion of skeleton by correcting some troubles happening in the small-sample regime; in particular, those troubles forced the authors to introduce a forcing mechanism. The issues of the original definition and the improvements induced by ours are listed in Appendix G. One key point of our definition is that the skeleton is defined using only empirical quantities, the numbers of samples, and does not depend on some arbitrary reference, such as the absolute time. 4 Regret of IMED-RL In this section we state the main theoretical result of this paper, which consists in the IMED-RL regret upper bound. We then sketch a few key ingredients of the proof. Theorem 2 (Regret upper bound for ergodic MDPs). Let M = (S, A, p, r) be an MDP satisfying Assumptions 1, 2, 3. Let 0 < ε ⩽ (1/3) min_{π∈Π(M)} min_{(s,a)∈XM} { |∆s,a(M(π))| : |∆s,a(M(π))| > 0 }. The regret of IMED-RL is upper bounded as R_IMED-RL(M, T) ⩽ ∑_{(s,a)∈C(M)} ∆s,a(M) / (Ks,a(M) − ε Γs(M)) · log T + O(1), (14) where Γs(M) is a constant that depends on the MDP M and the state s; it is made explicit in the proof detailed in Appendix D. A Taylor expansion allows one to write the regret upper bound as R_IMED-RL(M, T) ⩽ ∑_{(s,a)∈C(M)} ∆s,a(M) / Ks,a(M) · log T + O((log T)^{10/11}). (15) Were the semi-bounded reward assumption changed to a bounded-reward one with known upper and lower bounds, the O((log T)^{10/11}) could be made an O(1), as explained in Appendix E. Theorem 3 (Asymptotic optimality). IMED-RL is asymptotically optimal, that is, lim_{T→+∞} R_IMED-RL(M, T) / log T ⩽ ∑_{(s,a)∈C(M)} ∆s,a(M) / Ks,a(M). (16) The proof of Theorem 3 is immediate from Theorem 2, by first dividing Equation 14 by log T, then taking the limit T → ∞, and finally taking the limit ε → 0. Remark While the regret lower bound, Theorem 1, is asymptotic by nature, our main Theorem 2 states a finite-time upper bound on the regret of IMED-RL. Indeed, both Equations 14 and 15 are valid for all times T. The term O(1) appearing in Equation 14 does not depend on the time T and is a constant that depends on both the MDP M and ε.
This dependence is hard to make explicit, as this term is computed as the limit of convergent series that are derived in the proof; see Appendix D. In Equation 14, the constant ∑_{(s,a)∈C(M)} ∆s,a(M) / (Ks,a(M) − ε Γs(M)) in front of log T does not exactly match the asymptotic upper bound ∑_{(s,a)∈C(M)} ∆s,a(M) / Ks,a(M), because of the ε-term in the denominators. Equation 15 states that using a bounded-reward hypothesis, instead of a semi-bounded one, allows the constant in front of the leading log T term to exactly match the asymptotic one, even in the finite-time regret upper bound. In both cases, Theorem 3 states that asymptotic optimality is achieved. This theorem proves the optimality of IMED-RL, since the upper bound on the regret matches the lower bound of Theorem 1. Such a bound was asymptotically matched by the algorithm proposed by Burnetas and Katehakis [1997]; we recall that this algorithm and its problems are discussed in Appendix G. On the other hand, the current state-of-the-art algorithms UCRL3 and PSRL, while having some theoretical guarantees, have not been proved to match the regret lower bound. On the practical side, Q-learning is often used, without much theoretical guarantee, because of its usually strong practical performance. In the experiments, we will compare IMED-RL to those three algorithms. Related work Theorems 2 and 3 prove that IMED-RL achieves the optimal rate of exploration (in the exploration-exploitation trade-off sense) for ergodic MDPs. Its theoretical guarantees are problem-dependent rather than worst-case/minimax. Comparing to the log T bound derived for UCRL in Theorem 4 of Jaksch et al. [2010], less known than the √T bound, shows the benefit of our analysis for each instance, as we improve the constant factors in the leading terms: their dependency is 34 D²S²A/∆, where ∆ is a sub-optimality gap and D the diameter of the MDP. Sketch of proof Though a full proof is given in Appendix D, we sketch here the main proof ideas, which follow directly from the intuitions behind the conception of IMED-RL. The regret is decomposed into two terms: the bandit term, for the times when the local bandit problems defined by φ_{M̂t(A(t))} are well estimated, and the skeleton improvement term, which controls the probability that the local bandit problems are not well estimated. This second term is managed by controlling the number of policy improvement steps and using concentration properties of the empirical quantities defined on the skeleton. The main Theorem 2 follows from the following proposition, which is proved in Appendix D. Recall from Lemma 1 that for every state-action pair x ∈ XM, Nx(T) = ∑_{t=1}^T 1{(st, at) = x} counts the number of times the state-action pair x has been sampled. Proposition 1. For every state-action pair x ∈ XM and every ε > 0, Nx(T) ⩽ Bx(T) + S(T), (17) where we introduced the bandit term, Bx(T), and the skeleton improvement term, S(T), Bx(T) = ∑_{t=1}^T 1{ xt = x, O(M̂t(A(t))) ⊆ O(M), ∥b_{M̂t(A(t))} − bM∥∞ ⩽ ε }, (18) S(T) = ∑_{t=1}^T 1{ O(M̂t(A(t))) ⊈ O(M) or ∥b_{M̂t(A(t))} − bM∥∞ > ε }. (19) Furthermore, E(S(T)) = O(1), E(Bx(T)) = O(1) for a non-critical state-action pair, while for a critical state-action pair x, E(Bx(T)) ⩽ ∆x(M) / (Kx(M) − ε Γs(M)) · log T + O(1). 5 Numerical experiments In this section, we discuss the practical implementation and numerical aspects of IMED-RL; the discussion is extended in Appendix F. Source code is available on GitHub (https://github.com/fabienpesquerel/IMED-RL).
Computing the IMED-RL index At each time step, we run the value iteration algorithm on M̂t(A(t)) to compute the optimal bias and the associated potential function φ_{M̂t(A(t))}. This task is standard. Once done, one must compute the value of the optimization problem Ks,a(t), which belongs to the category of convex optimization problems with linear constraints. Such problems have been studied under the name of partially-finite convex optimization, e.g. in Borwein and Lewis [1991]. It is possible to compute Ks,a(t) by considering the Legendre-Fenchel dual, and one does not need to compute the optimal distribution to know the value of the optimization problem. Proposition 2 (Index computation, Honda and Takemura [2015], Theorem 2). Let (s, a) be in XM, M = mmax(s, a) + max_{s′∈S} bM(s′), and γ > φM(r(s, a) ⊗ p(·|s, a)). If M > γ, then Ks,a(M, γ) = max_{0 ⩽ x ⩽ 1/(M−γ)} E_{R∼r(s,a), S∼p(·|s,a)}[ log( 1 − (R + bM(S) − γ) x ) ], (20) and Ks,a(M, γ) = +∞ otherwise. If γ ⩽ φM(r(s, a) ⊗ p(·|s, a)), then Ks,a(M, γ) = 0. In particular, this Proposition 2 sometimes allows one to write Ks,a(t) almost in closed form, e.g. when Fs,a, defined in Assumption 3, is a set of multinomials with unknown support (and only the upper bound mmax is known). In Appendix F, we discuss this numerical computation further. Computational complexity In terms of state and action space sizes, the complexity of IMED-RL at each time step scales as O(S²A), the complexity of value iteration. Indeed, at each time step, IMED-RL runs value iteration using the actions available in the skeleton, then computes the indexes of the available actions at the current state, and finally picks an argmin. The complexity of value iteration is O(S²A), the complexity of computing the A necessary indexes is O(SA), and the complexity of picking an argmin amongst those A indexes is O(A). Therefore, the per-time-step complexity of IMED-RL scales as O(S²A). However, this scaling is mainly an upper bound, as value iteration is run with the actions that are within the skeleton. By design of the skeleton, we experimentally observe that, after some time, the skeleton contains one action per state (the optimal one). We provide more details in Appendix F, Lazy update paragraph. Practical comparison In practice, most of the complexity of IMED-RL is in the analysis rather than in the algorithm: compared to PSRL and UCRL3, IMED-RL takes neither a confidence parameter nor any hyperparameter. Also, IMED-RL uses value iteration as a routine, which is faster than the extended value iteration used in UCRL3. Q-learning technically takes an exploration parameter (ε-greedy exploration), or an exploration scheme when ε is slowly decreased with time. Environments In different environments, we illustrate in Figure 2 and Figure 3 the performance of IMED-RL against the strategies UCRL3 (Bourel et al. [2020]), PSRL (Osband et al. [2013]) and Q-learning (run with discount γ = 0.99 and optimistic initialization). As stated in the introduction, any finite communicating MDP can be turned into an ergodic one, since on such MDPs, any stochastic policy π : S → P(As) with full support Supp(π(s)) = As is ergodic. Hence, by mixing its transition function p with that obtained from playing a uniform policy, formally pε(·|s, a) = (1 − ε) p(·|s, a) + ε ∑_{a′∈As} p(·|s, a′)/|As|, for an arbitrarily small ε > 0, one obtains an ergodic MDP.
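Proposition 2 reduces the index computation to a one-dimensional concave maximization. The following sketch (ours, assuming SciPy is available) estimates K_{s,a} from samples, writing M_upper for the quantity M = mmax(s, a) + max_{s′} bM(s′) of the proposition and shrinking the upper bound slightly to avoid evaluating the logarithm at 0.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def K_index(rewards, next_bias, gamma, M_upper):
    """Empirical sub-optimality cost via the dual form of Proposition 2.
    rewards: samples R_i ~ r(s, a); next_bias: b(S_i) for S_i ~ p(.|s, a)."""
    if M_upper <= gamma:
        return np.inf
    z = rewards + next_bias - gamma                  # R + b(S) - gamma
    neg = lambda x: -np.mean(np.log(1.0 - z * x))    # minus the objective
    hi = (1.0 - 1e-9) / (M_upper - gamma)            # stay inside the domain
    res = minimize_scalar(neg, bounds=(0.0, hi), method="bounded")
    return max(-res.fun, 0.0)
```

And the ε-mixing that turns a communicating MDP into an ergodic one is a one-liner on the transition array (again our own sketch):

```python
import numpy as np

def make_ergodic(p, eps=1e-3):
    """p_eps(.|s,a) = (1 - eps) p(.|s,a) + eps * mean_{a'} p(.|s,a').
    p: (S, A, S) transition array; returns the mixed transitions."""
    return (1.0 - eps) * p + eps * p.mean(axis=1, keepdims=True)
```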
In the experiments, we consider ergodic versions of the classical n-state RiverSwim environment and of the 2-room and 4-room environments with ε = 10⁻³, as well as the classical communicating versions (ε = 0). n-state RiverSwim environment As illustrated by Figure 2, the performance of IMED-RL is particularly good, and the regret of IMED-RL is below the regrets of all its competitors, even when the MDP is communicating only. This numerical performance grounds the preceding theoretical analysis. While using IMED-RL in communicating MDPs is not endorsed by our theoretical analysis, it is interesting to see how much this hypothesis accounts for in the numerical performance of IMED-RL. We therefore ran an experiment on another classical environment, 2-rooms. n-rooms environment As illustrated by Figure 3, the performance of IMED-RL is particularly good, even surprisingly good, in this communicating-only environment. Those experiments are a clue that the IMED-RL strategy may still be reasonable, although not necessarily optimal, in some communicating MDPs. All experiments take less than an hour to run on a standard CPU. Future work Although not intended for non-ergodic MDPs, IMED-RL exhibits state-of-the-art numerical performance in communicating-only MDPs (see Appendix F.2 for additional experiments). Hence, IMED-RL might prove insightful to pave the way towards the communicating case, as it seems possible to obtain a controlled regret also in the case of communicating MDPs. Both problem-dependent and worst-case regret bounds are interesting in this regard. Another direction we intend to explore is the adaptation of the main ideas of IMED-RL to function approximation frameworks, such as neural networks and kernel methods. Conclusion In this paper, we introduced IMED-RL, a numerically efficient algorithm for the average-reward criterion under the ergodic assumption, for which we derive an upper bound on the regret matching the known regret lower bound. Further, its surprisingly good numerical performance in communicating-only MDPs opens a path to future work in MDPs that are communicating only. Acknowledgments and Disclosure of Funding This work has been supported by the French Ministry of Higher Education and Research, Inria, Scool, the Hauts-de-France region, the MEL and the I-Site ULNE regarding project R-PILOTE-19-004-APPRENF. The PhD of Fabien Pesquerel is supported by a grant from École Normale Supérieure.
1. What is the focus of the paper regarding regret minimization in MDPs? 2. What are the strengths of the proposed algorithm, particularly in terms of its theoretical guarantees and empirical performance? 3. What are the weaknesses of the paper regarding its presentation and comparison with other works? 4. How does the reviewer suggest comparing the problem-dependent lower bound with the T-type minimax lower bound? 5. Do you have any concerns about the finite time regret or the extension to the communicating setting? 6. How do the two regret definitions used in the paper and UCRL differ, and how can they be compared? 7. Is there a direct way to extend the algorithm to linear function approximation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies regret minimization in infinite-horizon average-reward MDPs. The authors propose a new algorithm based on the Indexed Minimum Empirical Divergence (IMED) bandit algorithm, and show that the new algorithm achieves a regret matching the asymptotic problem-dependent lower bounds. Strengths And Weaknesses Strength: the paper is clearly written. The proposed algorithm has a strong theoretical guarantee and competitive empirical performance. Weakness: Many important details are hidden in the Appendix (for example, the limitation of existing work is deferred to Appendix G). Thus, the contribution of this paper is not clear at first glance compared to existing works. I think more highlights of the contribution in the main text would help. Questions How should I compare the asymptotic problem-dependent lower bound with the √T-type minimax lower bound? Does the algorithm also ensure a √T finite-time regret? What's the difficulty of extending the current analysis to the communicating setting? The regret definition is different from the one used in the UCRL paper: R_T = ∑_t (g_{π⋆} − r_t). How should I compare these two types of definitions? Do you see a direct way to extend to linear function approximation? Limitations See "Strengths And Weaknesses"
NIPS
Title IMED-RL: Regret optimal learning of ergodic Markov decision processes Abstract We consider reinforcement learning in a discrete, undiscounted, infinite-horizon Markov Decision Problem (MDP) under the average reward criterion, and focus on the minimization of the regret with respect to an optimal policy, when the learner does not know the rewards nor the transitions of the MDP. In light of their success at regret minimization in multi-armed bandits, popular bandit strategies, such as the optimistic UCB, KL-UCB or the Bayesian Thompson sampling strategy, have been extended to the MDP setup. Despite some key successes, existing strategies for solving this problem either fail to be provably asymptotically optimal, or suffer from prohibitive burn-in phase and computational complexity when implemented in practice. In this work, we shed a novel light on regret minimization strategies, by extending to reinforcement learning the computationally appealing Indexed Minimum Empirical Divergence (IMED) bandit algorithm. Traditional asymptotic problem-dependent lower bounds on the regret are known under the assumption that the MDP is ergodic. Under this assumption, we introduce IMED-RL and prove that its regret upper bound asymptotically matches the regret lower bound. We discuss both the case when the supports of transitions are unknown, and the more informative but a priori harder-to-exploit-optimally case when they are known. Rewards are assumed light-tailed, semi-bounded from above. Last, we provide numerical illustrations on classical tabular MDPs, ergodic and communicating only, showing the competitiveness of IMED-RL in finite-time against state-of-the-art algorithms. IMED-RL also benefits from a light complexity. 1 Introduction We study Reinforcement Learning (RL) with an unknown finite Markov Decision Problem (MDP) under the average-reward criterion in which a learning algorithm interacts sequentially with the dynamical system, without any reset, in a single and infinite sequence of observations, actions, and rewards while trying to maximize its total accumulated rewards over time. Formally, we consider a finite MDP M = (S,A,p, r) where S is the finite set of states, A = (As)s∈S specifies the set of actions available in each state and we introduce the set of pairs XM = {(s, a) : s ∈ S, a ∈ As} for convenience. Further1, p : XM → P(S) is the transition distribution function and r : XM → P(R) the reward distribution function, with corresponding mean reward function denoted by m : XM → R. An agent interacts with the MDP at discrete time steps t ∈ N∗ and yields a random sequence (st, at, rt)t of states, actions, and rewards in the following way. At each time step t, the agent observes the current state st and decides the action at to take based on st and possibly past information, i.e. previous elements of the sequence. After playing at, it observes a reward rt ∼ r (st, at), the current state of the MDP changes to st+1 ∼ p (·|st, at) and the agent proceeds sequentially. In the average- 1Given a set E, P (E) denotes the set of probability distributions on E. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). reward setting, one is interested in maximizing the limit, 1T ∑T t=1 rt, when T → ∞, providing it exists. 
This setting is a popular framework for studying sequential decision making problems; it can be traced back to seminal papers such as those of Graves and Lai [1997] and Burnetas and Katehakis [1997] This theoretical framework allows to study the exploration-exploitation trade-off that arises from the sequential optimization problem a learner is trying to solve while being uncertain about the very problem it is optimizing. In this paper, one is interested in developing a sampling strategy that is optimal amongst strategies that aim at maximizing the average-reward, i.e. balancing exploration and exploitation in an optimal way. To assert optimality, we define the notion of regret and state a regret lower bound with the purpose of defining a theoretically sound notion of optimality that is problem-dependent. While regret defines the discrepancy to optimality of a learning strategy, a problem-dependent regret lower bound will formally assess the minimal regret that any learning algorithm must incur on a given MDP problem by computing a minimal rate of exploration. Because this minimal rate of exploration depends on the problem, it is said to be problem-dependent, as opposed to worst case regret study that can exist in the MDP literature (e.g. Jaksch et al. [2010]). Regret lower bounds currently exist in the literature when the MDP M is assumed to be ergodic2. Hence we hereafter make this assumption, in order to be able to compare the regret of our algorithm to an optimal bound. Similarly, to ensure fast enough convergence of the empirical estimate of the reward to the true mean, an assumption controlling the rate of convergence to the mean is necessary. Assumption 1 (Light-tail rewards). For all x ∈ XM, the moment generating function of the reward exists in a neighborhood of 0: ∃λx> 0,∀λ ∈ R such that |λ| < λx,ER∼r(x)[exp(λR)] < ∞. Policy Regret and ergodicity are defined using properties of the set of stationary deterministic policies Π(M) on M. On M, each stationary deterministic policy π : S → As defines a Markov reward process, i.e. a Markov chain on S with kernel pπ : s ∈ S 7→ p (·|s, π(s)) ∈ P (S) together with rewards rπ : s ∈ S 7→ r (s, π(s)) ∈ P (R) and associated mean rewards mπ : s ∈ S 7→ m (s, π(s)) ∈ R. The t-steps transition kernel of π on M is denoted ptπ. We denote pπ= lim T→∞ 1 T T∑ t=1 pt−1π : S → P(S) the Cesaro-average of pπ . A learning agent is executing a sequence of policies πt∈Π(M), t⩾1, where πt depends on past information (st′ , at′ , rt′)t′<t. With a slight abuse of notation, a sequence of identical decision rules, πt = π for all t, is also denoted π. Gain The cumulative reward (value) at time T , starting from an initial state s1 of policy π = (πt)t is formally given by Vs1(M, π, T ) = Eπ,M,s1 [ T∑ t=1 rt ] = Eπ,M,s1 [ T∑ t=1 m(st, at) ] = T∑ t=1 ( t−1∏ t′=1 pπt′mπt′ ) (s1) . (1) For π ∈ Π(M), the average-reward 1T Vs1(M, π, T ) tends to (pπm) (s1) as T → ∞. The gain of policy π ∈ Π(M), when starting from state s1 is defined by gπ(s1) = (pπm)(s1) and the optimal gain is defined as g⋆(s1) = maxπ∈Π(M) gπ(s1). Os(M) = {π ∈ Π : gπ(s) = g⋆(s)} is the set of policies achieving maximal gain on M starting from state s. Definition 1 (Regret). The regret at time T of a learning policy π = (πt)t starting at state s on an MDP M is defined with respect to any π⋆ ∈ Os (M), as Rπ,s (M, T ;π⋆) = Vs(M, π⋆, T )− Vs(M, π, T ) . (2) In this paper, we aim to find a learning algorithm with asymptotic minimal regret. 
The Lemma 1 will prove that for all optimal policies, π⋆, regrets are the same up to a bounded term that therefore does not count in asymptotic analysis. Some authors such as Bourel et al. [2020] define the regret as TgM(s) − Vs(M, π, T ) which is equal to the one we defined up to a bounded term (again by Lemma 1). No stationary policy can be optimal at all time and the important fact is that all those notions of regret induce the same asymptotic lower bound. In the considered setting, the learning agent interacts with the MDP without any reset. The minimal assumption would be to allow the agent to come back with positive probability from any initial 2We prefer the term ergodic over the more accurate one, irreducible as it is a standard abuse of terminology in the MDP community. Mathematically, an MDP is ergodic if both irreducible, aperiodic and positive recurrent. mistake in finite time, so that the agent is not stuck in a sub-optimal area of the system. This is assuming that the MDP is communicating, that is ∀s, s′,∃π, t ∈ N : ptπ(s′|s) > 0. However, in the literature, lower bounds on the regret are stated for MDPs satisfying a stronger assumption, ergodicity. Since one is interested in crafting an algorithm matching a lower bound, we consider this stronger assumption. Assumption 2 (Ergodic MDP). The MDP M is ergodic, that is ∀s, s′,∀π,∃t ∈ N : ptπ(s′|s) > 0. Intuitively, this means that for all policies and all couples of states, there exists a finite trajectory of positive probability between the states. Interestingly, the ergodic property can be assumed on the MDP or on the set of policies in which we seek an optimal one. For instance, in any communicating MDP all ε-soft policies3 are ergodic; more in the Experiment section 5 and Appendix E. Related work Had the MDP only one state, it would be a bandit problem. Lower bound on the bandit regret and algorithms matching this lower bound, sometimes up to a constant factor, are well studied in the bandit literature. Therefore, bandit sampling strategies with known theoretical guarantees have inspired RL algorithms. The KL-UCB algorithm (Burnetas and Katehakis [1996], Maillard et al. [2011]), has inspired the strategy of the seminal paper of Burnetas and Katehakis [1997], as well the more recent KL-UCRL strategy (Filippi et al. [2010] Talebi and Maillard [2018]). Inspired by the UCB algorithm (Agrawal [1995], Auer et al. [2002]), a number of strategies implementing the optimism principle have emerged such as UCRL (Auer and Ortner [2006]), UCRL2 (Jaksch et al. [2010]) and UCRL3 (Bourel et al. [2020] (and beyond, Azar et al. [2017], Dann et al. [2017] for the related episodic setup). The strategy PSRL (Osband et al. [2013]) is inspired by Thompson sampling (Thompson [1933]). Outline and contribution In this work, we build on the IMED strategy (Honda and Takemura [2015]), a bandit algorithm that benefits from practical and optimal guarantees but has never been used by the RL community. We fill this gap by proposing the IMED-RL algorithm which we prove to be asymptotically optimal for the average-reward criterion. We revisit the notion of skeleton (Equation 12) introduced in the seminal work of Burnetas and Katehakis [1997], with a subtle but key modification that prevents a prohibitive burn-in phase (see Appendix G for further details). Further, this novel notion of skeleton enables IMED-RL to remove any tracking or hyperparameter and mimic a stochastic-policy-iteration-like algorithm. 
4 Further, this skeleton scales naturally with the studied MDP as it does not explicitly refer to absolute quantities such as the time. We prove that our proposed IMED-RL is asymptotically optimal and show its numerical competitivity. Building on IMED, we make an additional assumption on the reward that is less restrictive than the common bounded reward hypothesis made in the RL community. Assumption 3 (Semi-bounded rewards). For all x ∈ X , r(x) belongs to a subset Fx ⊂ P (R) known to the learner.5 There exists a known quantity mmax(x)∈R such that for all x ∈ X , the support Supp(r(x)) of the reward distribution is semi-bounded from above, Supp(r(x)) ⊂]−∞,mmax(x)], and its mean satisfies m(x) < mmax(x). Ergodic assumption While many recent works focused on worst-case regret bounds only (e.g. Domingues et al. [2021], Zanette and Brunskill [2019], Jin et al. [2018] and citations therein), studying problem-dependent optimal regret bounds has been somewhat overlooked. Being more general is always more appealing but the restriction from communicating MDPs to ergodic MDPs allows us to target exact asymptotic optimality ; not just bound, not just worst-case bound. Ergodic MDPs is the only case in which explicit problem-dependent lower bounds are known and hence can be directly used to build a strategy. Indeed, the main challenge towards problem-dependent optimality is that existing lower bounds for exploration problems in MDPs are usually written in terms of non-convex optimization problems. This implicit form makes it hard to understand the actual complexity of the setting and, thus, to design optimal algorithms. Existing proof strategies for state-of-the-art algorithms (UCRL, PSRL, etc) ensure a regret for communicating MDPs but fail to provide optimality guarantees even in the ergodic case. We believe that deriving a sharp result in the ergodic case 3A policy π : S → P(As) is ε-soft if π(a|s) ⩾ ε/|As| for all s and a. 4The skeleton in Burnetas and Katehakis [1997] is sometimes empty at some states, when t is too small, this causes the strategy to work well only after t is large enough to ensure that the skeleton contains at least one action in each state. 5e.g. Bernoulli, multinomial with unknown support, beta, truncated Gaussians, a mixture of those, etc. might prove to be insightful to pave the way towards the communicating case. From a theoretical standpoint, related to UCRL type strategy, modern analysis of KL-UCRL by Talebi and Maillard [2018] also makes the ergodic assumption. This hypothesis has also been used in the theoretical work of Tewari and Bartlett [2007] and the work of Ok et al. [2018] that concerns structured MDPs. Related to this assumption are works that are interested in identification and sample complexity. Wang [2017] introduced a primal-dual method to compute an ε-optimal policy and bound the number of sample transitions to reach this goal. Jin and Sidford [2020] relaxed the ergodic hypothesis by using a mixing hypothesis that implies the uniqueness of recurrent class for each policy. In this setting, the authors also derive a bound on the number of samples to compute an ε-optimal policy. 2 Regret lower bound In this section, we recall the regret lower bound for ergodic MDPs and provide a few insights about it. Characterizing optimal policies Relying on classical results that can be found in the books of Puterman [1994] and Hernández-Lerma and Lasserre [1996], we give a useful characterization of optimal policies that is used to derive a regret lower bound. 
Under the ergodic Assumption 2 of MDP M, for all policy π ∈ Π(M), the gain is independent from the initial state, i.e. gπ(s) = gπ(s′) for all states s and s′, and we denote it gπ . Similarly, the set of optimal policies O(M) is state-independent since Os(M) = Os′(M). Any policy π satisfy the following fixed point property (Poisson equation) gπ + bπ(s) = mπ(s) + (pπbπ)(s) , (3) where bπ : S → R is called the bias function and is defined up to an additive constant by bπ(s) =( ∞∑ t=1 (pt−1π − pπ)mπ ) (s). We highlight that bias plays a role similar to the value function in the discounted reward setting in which the gain is always zero and Equation 3 reduces to the Bellman equation, giving a direction in which extend our results to this other RL setting. Interestingly, for any communicating and a fortiori ergodic MDP, the span S(bπ) = max s∈S bπ(s)−min s∈S bπ(s) of the bias function of any policy is bounded, which allows to decompose the regret in the useful following way. Lemma 1 (Regret decomposition). Under the ergodic assumption 2, for all optimal policy ⋆ ∈ O(M), the regret of any policy π = (πt)t can be decomposed as Rπ,s1 (M, T ; ⋆) = ∑ x∈XM Eπ,s1 [Nx(T )]∆x (M) + ([ T∏ t=1 pπt − pt⋆ ] b⋆ ) (s1)︸ ︷︷ ︸ ⩽S(b⋆) , (4) where Ns,a(T ) = ∑T t=1 1 {st = s, at = a} counts the number of time the state-action pair (s, a) has been sampled and ∆s,a (M) is the sub-optimality gap of the state-action pair (s, a) in M, ∆s,a (M) = m (s, a) + pab⋆(s)−m⋆(s)− p⋆b⋆(s) = m (s, a) + pab⋆(s)− g⋆ − b⋆(s) (5) with pa = p(·|s, a) by a slight abuse of notation. Action a ∈ As is optimal if and only if ∆s,a (M) = 0, otherwise, it is said sub-optimal. This result can be found in Puterman [1994] and is rederived in Appendix C. Under the ergodic Assumption 2 of MDP M, all optimal policies satisfy a Poisson equation while some are also being characterized by the optimal Poisson equation (see Hernández-Lerma and Lasserre [1996]), used to compute the optimal gain and a bias function associated to an optimal policy, gM + bM(s) = max a∈As { m(s, a) + ∑ s′∈S p(s′|s, a)bM(s′) } . (6) Lower bound To assess the minimal sampling complexity of a sub-optimal state action pair, one must compute how far a sub-optimal state-action pair is from being optimal from an information point-of-view. A sub-optimal state-action pair (s, a) ∈ XM is said to be critical if it can be made optimal by changing reward r(s, a) and transition p (·|s, a) while respecting the assumptions on the rewards and transitions. Formally, let φM : P (R× S) → R, φM (ν ⊗ q) = ER∼ν [R] + qbM (7) denotes the potential function of ν ⊗ q in M, where ν ⊗ q is the product measure of ν and q. A pair (s, a) ∈ XM is critical if it is sub-optimal and there exists ν ∈ Fs,a and q ∈ P (S) such that φM (ν ⊗ q) > γs(M) where γs(M) def = gM + bM(s). (8) Note that γs(M) = max a∈As φM(r(s, a)⊗ p(s, a)) by the optimal Poisson equation (6). Definition 2 (Sub-optimality cost). The sub-optimality cost of a sub-optimal state-action pair (s, a) ∈ XM is defined as Ks,a (M) def = Ks,a (M, γs(M)) where Ks,a (M, γ) = inf ν∈Fs,a q∈P(S) {KL (r(s, a)⊗ p(·|s, a), ν ⊗ q) : φM (ν ⊗ q) > γ} , (9) and KL denotes the Kullback-Leibler divergence between distributions. A lower bound on the regret may now be stated for a certain class of learner, the set of uniformly consistent learning algorithm, i.e. those policies π = (πt)t such that Eπ,M (Ns,a(T )) = o (Tα) for all sub-optimal state-action pair (s, a) and 0 < α < 1 (see Agrawal et al. [1989]). 
Theorem 1 (Regret lower bound Burnetas and Katehakis [1997]). Let M = (S,A,p, r) be an MDP satisfying Assumptions 1, 2, 3. For all uniformly consistent learning algorithm π, lim inf T→∞ Eπ,M [Ns,a(T )] log T ⩾ 1 Ks,a (M) (10) with the convention that 1/∞ = 0. The regret lower bound is lim inf T→∞ Rπ (M, T ) log T ⩾ ∑ (s,a)∈C(M) ∆s,a (M) Ks,a (M) (11) where C (M) = { (s, a) : 0 < Ks,a (M) < ∞ } is called the set of critical state-action pairs. Those are the state-action pairs (s, a) that could be confused for an optimal one if we were to change their associated rewards and transitions distributions at the displacement cost of Ks,a (M). 3 The IMED-RL Algorithm In this section we introduce and detail the IMED-RL algorithm, whose regret matches this fundamental lower bound and extends the IMED strategy from Honda and Takemura [2015] to ergodic MDPs. Indeed, for a single-state MDP, that is a multi-armed bandit, IMED-RL simply reduces to IMED. Empirical quantities IMED-RL is a model-based algorithm that keeps empirical estimates of the transitions p and rewards r as opposed to model-free algorithm such as Q-learning. We denote by r̂t(s, a) = r̂(s, a;Ns,a(t)) and p̂t(s, a) = p̂(s, a;Ns,a(t)) the empirical reward distributions and transition vectors after t time steps, i.e. using Ns,a(t) samples from the distribution r(s, a). Initially, p̂(s, a; 0) is the uniform probability over the state space and p̂(s, a; k) = (1− 1/k)p̂(s, a; k − 1) + (1/k)sk, where sk is a vector of zeros except for a one at index sk, the kth samples drawn from p(·|s, a). This defines at each time step t an empirical MDP M̂t = (S,A, p̂t, r̂t). On this empirical MDP, for each state, some actions have been sampled more than others and their empirical quantities are therefore better estimated. We call skeleton at time t the subset of state-action pairs that can be considered sampled enough at time t; it is defined by restricting As to As(t) for all state s ∈ S , with As(t) = { a ∈ As : Ns,a(t) ⩾ log2 ( max a′∈As Nsa′(t) )} . (12) Since x> log2 x, As(t) ̸= ∅, hence A(t) = (As(t))s contains at least one deterministic policy. We note that the MDP M(A(t)) def= (S,A(t),p, r) defined by restricting the set of actions to A(t) ⊆ A is an ergodic MDP. The restricted empirical MDP M̂t(A(t)) def = (S,A(t), p̂t, r̂t) also is ergodic thanks to the ergodic initialization of the estimate p̂. Inspired by IMED, we define the IMED-RL index. Definition 3 (IMED-RL index). For all state-action pairs (s, a) ∈ XM, let us define Ks,a(t) def = Ks,a ( M̂t(A(t)), γ̂s(t) ) with empirical threshold γ̂s(t) def = max a∈As φM̂t(A(t)) (r̂(s, a)⊗ p̂(s, a)) Then, the IMED-RL index of (s, a) at time t, Hs,a(t), is defined as Hs,a(t) = Ns,a(t)Ks,a(t) + logNs,a(t) . (13) Note that γ̂s(t) ̸= γs(M̂t(A(t))) as the maximum is taken over all a∈As an not just a∈As(t). Known support of transitions Were the support of transition known, the infimum in sub-optimality cost Ks,a defined by Equation 9 would be redefined as one over the set {q∈P (S) : Supp(q) = Supp (p (·|s, a))}, modifying both the lower bound and IMED-RL index. IMED-RL algorithm The IMED-RL algorithm consists in playing at each time step t, an action at of minimal IMED-RL index at the current state st. The intuition behind the IMED-RL index is similar to the one of the IMED index for bandits and stems from an information theoretic point-of-view of the lower bound. 
At a given time t, the frequency of play Ns,a(t)/Ns(t) of action a ∈ As in state s ∈ S should be larger than or equal to its posterior probability of being the optimal action in that state, exp(−Ns,a(t) Ks,a(t)), that is to say Ns,a(t)/Ns(t) ⩾ exp(−Ns,a(t) Ks,a(t)). Taking the logarithm and rearranging the terms, this condition rewrites as Hs,a(t) ⩾ log Ns(t) at each time step t. The action that is the closest to violating this condition, or that violates it the most, is the one of minimal IMED-RL index, argmina Hs,a(t), the one IMED-RL decides to play.

Algorithm 1 IMED-RL: Indexed Minimum Empirical Divergence for Reinforcement Learning
Require: State-action space XM of MDP M, Assumptions 1, 2, 3
Require: Initial state s1
for t ⩾ 1 do
  Sample at ∈ argmin a∈Ast Hst,a(t)
end for

The intuitions behind the IMED-RL algorithm are rooted in the control theory of MDPs and in optimal bandit theory; IMED-RL intertwines the two, and the regret proof follows exactly from the following intuitions.

Control. In control theory, we assume that both the expected rewards and the transition probabilities of an MDP M are known. Policy iteration (see Puterman [1994], Bertsekas and Shreve [1978]) is an algorithm that computes a sequence (πn)n of deterministic policies that are increasingly strictly better until an optimal policy is reached. In the average-reward setting and under the ergodic assumption, a policy π is strictly better than another policy π′ if gπ(M) > gπ′(M). The policy iteration algorithm computes the sequence of policies recursively in the following way. Initially, an arbitrary deterministic policy π0 is chosen. At step n + 1 ∈ N∗, it computes mπn and bπn, then sweeps through the states s ∈ S in an arbitrary order until it reaches one state s such that there exists a ∈ As with m(s, a) + p(·|s, a) bπn > mπn(s) + pπn(s) bπn. If no such s exists, then it returns πn as an optimal policy. Otherwise, πn+1 is defined as πn+1(s′) = πn(s′) for all s′ ≠ s and πn+1(s) ∈ argmax {m(s, a) + p(·|s, a) bπn}. Such a step is called a policy improvement step. Policy iteration is guaranteed to finish in a finite number of steps, as the cardinality of Π(M) is finite. At each step n ∈ N∗, φM(πn) is a local function that takes into account the whole dynamics of the MDP and allows one to compute, via an argmax, an optimal choice of improvement (or optimal action) based on local information: φM(πn)(r(s, a) ⊗ p(·|s, a)) = m(s, a) + p(·|s, a) bπn. IMED-RL uses φM̂(A(t)) and improves the skeleton similarly to policy iteration, as can be seen in the analysis of Section 4.

Bandit control. A degenerate case of an MDP would be one with only one state s, where φM(r(s, a)) = m(s, a) by choosing the bias function to be zero.⁶ Playing optimally consists in playing an action with the largest expected reward at each time step t, at ∈ argmaxa∈As m(s, a).

Bandit. Learning occurs when rewards are unknown; this is the bandit problem. In that case, a lower bound on the regret similar to Theorem 1 exists. Under some assumptions on the reward distributions, optimal algorithms whose regret upper bounds asymptotically match the lower bound can be derived. IMED (Honda and Takemura [2015]) and KL-UCB (Maillard et al. [2011], Cappé et al. [2013]) are two such examples that use indexes, i.e. compute a number Is,a(t) at each time step and play at ∈ argmin Is,a(t). Such indexes are crafted to correctly handle the exploration-exploitation trade-off.

RL in Ergodic MDPs. The delayed rewards caused by the dynamics of the system are the main source of difficulty arising from having more than one state.
⁶ Recall that the bias function is defined up to an additive constant.
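The Control paragraph above describes policy iteration with one improvement at a time. The following sketch is one possible reading of that routine for a known ergodic MDP; it reuses the hypothetical gain_and_bias helper from the earlier snippet and is illustrative, not the paper's implementation.

```python
import numpy as np

def policy_iteration(P, m, tol=1e-10):
    """Average-reward policy iteration, as described in the Control paragraph.

    P : (S, A, S) transition tensor, P[s, a] = p(.|s, a).
    m : (S, A)   mean rewards m(s, a).
    Relies on the gain_and_bias helper sketched after Lemma 1.
    """
    S, A, _ = P.shape
    pi = np.zeros(S, dtype=int)               # arbitrary initial deterministic policy
    while True:
        P_pi = P[np.arange(S), pi]            # (S, S) kernel of the current policy
        m_pi = m[np.arange(S), pi]
        g, b = gain_and_bias(P_pi, m_pi)
        potentials = m + P @ b                # m(s,a) + p(.|s,a) . b_pi, shape (S, A)
        improvable = potentials.max(axis=1) > m_pi + P_pi @ b + tol
        if not improvable.any():
            return pi, g                      # no improving state: pi is optimal
        s = int(np.flatnonzero(improvable)[0])  # first improvable state (arbitrary order)
        pi[s] = int(potentials[s].argmax())     # policy improvement step at state s
```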
IMED-RL combines control and bandit theory in the following way. At each time step t, a restricted MDP M̂t(A(t)) is built from the empirical one M̂t. If the condition to belong to the skeleton is selective enough, then the potentials on the restricted empirical MDP M̂t(A(t)) may become close to those of the restricted true MDP M(A(t)), that is, ∥φM̂t(A(t)) − φM(A(t))∥∞ is small. We want to make policy improvements by finding, at each state s, an action a′ ∈ argmax φM(A(t))(r(s, a) ⊗ p(·|s, a)), playing it enough that it belongs to the skeleton (which will modify φ), and repeating until φM(A(t)) = φM. Using φ, the global dynamics are reduced to a local function, so that at each state the agent is presented with a bandit problem. This bandit problem is well estimated if ∥φM̂t(A(t)) − φM(A(t))∥∞ is small. As opposed to the control setting, the learning agent cannot choose the state in which to make the policy improvement step, and it may be possible that no policy improvement step is possible at state st. However, thanks to the ergodic Assumption 2, the agent is guaranteed to visit such a state in finite time, if it exists. There is a trade-off between the adaptivity of the skeleton, i.e. how quickly one can add an improving action to define a new φ, and the concentration of statistical quantities defined on the restricted MDP.

Related work. Our notion of skeleton is built on the work of Burnetas and Katehakis [1997]. We improve on their original notion of skeleton by correcting some issues arising in the small-sample regime; in their work, these issues forced the authors to introduce a forcing mechanism. The issues of the original definition and the improvements induced by ours are listed in Appendix G. One key point of our definition is that the skeleton is defined using only empirical quantities, the numbers of samples, and does not depend on some arbitrary reference, such as the absolute time.

4 Regret of IMED-RL

In this section we state the main theoretical result of this paper, which consists in the IMED-RL regret upper bound. We then sketch a few key ingredients of the proof.

Theorem 2 (Regret upper bound for Ergodic MDPs). Let M = (S, A, p, r) be an MDP satisfying Assumptions 1, 2, 3. Let 0 < ε ⩽ (1/3) minπ∈Π(M) min(s,a)∈XM {|Δs,a(M(π))| : |Δs,a(M(π))| > 0}. The regret of IMED-RL is upper bounded as

$$ R_{\text{IMED-RL}}(M, T) \leqslant \sum_{(s,a) \in \mathcal{C}(M)} \frac{\Delta_{s,a}(M)}{K_{s,a}(M) - \varepsilon\, \Gamma_s(M)} \log T + O(1), \quad (14) $$

where Γs(M) is a constant that depends on the MDP M and the state s; it is made explicit in the proof detailed in Appendix D. A Taylor expansion allows us to write the regret upper bound as

$$ R_{\text{IMED-RL}}(M, T) \leqslant \sum_{(s,a) \in \mathcal{C}(M)} \frac{\Delta_{s,a}(M)}{K_{s,a}(M)} \log T + O\big((\log T)^{10/11}\big). \quad (15) $$

Were the semi-bounded reward assumption changed to a bounded-reward one with known upper and lower bounds, the O((log T)^{10/11}) could be made an O(1), as explained in Appendix E.

Theorem 3 (Asymptotic Optimality). IMED-RL is asymptotically optimal, that is,

$$ \lim_{T \to +\infty} \frac{R_{\text{IMED-RL}}(M, T)}{\log T} \leqslant \sum_{(s,a) \in \mathcal{C}(M)} \frac{\Delta_{s,a}(M)}{K_{s,a}(M)}. \quad (16) $$

The proof of Theorem 3 is immediate from Theorem 2 by first dividing Equation 14 by log T, then taking the limit T → ∞, and finally taking the limit ε → 0.

Remark. While the regret lower bound, Theorem 1, is asymptotic by nature, our main Theorem 2 states a finite-time upper bound on the regret of IMED-RL. Indeed, both Equations 14 and 15 are valid for all times T. The term O(1) appearing in Equation 14 does not depend on the time T and is a constant that depends on both the MDP M and ε.
This dependency is hard to make explicit, as this term is computed as limits of convergent series that are derived in the proof; see Appendix D. In Equation 14, the constant $\sum_{(s,a) \in \mathcal{C}(M)} \frac{\Delta_{s,a}(M)}{K_{s,a}(M) - \varepsilon \Gamma_s(M)}$ in front of log T does not exactly match the asymptotic upper bound $\sum_{(s,a) \in \mathcal{C}(M)} \frac{\Delta_{s,a}(M)}{K_{s,a}(M)}$ because of the ε-term in the denominators. Equation 15 states that using a bounded-reward hypothesis, instead of a semi-bounded one, allows the constant in front of the leading log T term to exactly match the asymptotic one, even in the finite-time regret upper bound. In both cases, Theorem 3 states that asymptotic optimality is achieved. This theorem proves the optimality of IMED-RL, since the upper bound on the regret matches the lower bound of Theorem 1. Such a bound was asymptotically matched by the algorithm proposed by Burnetas and Katehakis [1997], and we recall that this algorithm and its problems are discussed in Appendix G. On the other hand, the current state-of-the-art algorithms UCRL3 and PSRL, while having some theoretical guarantees, have not been proved to match the regret lower bound. On the practical side, Q-learning is often used without much theoretical guarantee because of its usually strong practical performance. In the experiments, we compare IMED-RL to those three algorithms.

Related work. Theorems 2 and 3 prove that IMED-RL achieves the optimal rate of exploration (in the exploration-exploitation trade-off sense) for ergodic MDPs. Its theoretical guarantees are problem-dependent rather than worst-case/minimax. Comparing to the log T bound derived for UCRL in Theorem 4 of Jaksch et al. [2010], less known than the √T bound, shows the benefit of our analysis on each instance, as we improve the constant factors in the leading terms: their dependency is 34 D²S²A/Δ, where Δ is a sub-optimality gap and D the diameter of the MDP.

Sketch of proof. Though a full proof is given in Appendix D, we sketch here the main proof ideas, which follow directly from the intuitions behind the IMED-RL conception. The regret is decomposed into two terms: the bandit term, corresponding to the times when the local bandit problems defined by φM̂t(A(t)) are well estimated, and the skeleton improvement term, which controls the probability that the local bandit problem is not well estimated. This second term is managed by controlling the number of policy improvement steps and by using concentration properties of empirical quantities defined on the skeleton. The main Theorem 2 follows from the following proposition, which is proved in Appendix D. Recall from Lemma 1 that for all state-action pairs x ∈ XM, $N_x(T) = \sum_{t=1}^{T} \mathbb{1}\{(s_t, a_t) = x\}$ counts the number of times the state-action pair x has been sampled.

Proposition 1. For all state-action pairs x ∈ XM and all ε > 0,

$$ N_x(T) \leqslant B_x(T) + S(T), \quad (17) $$

where we introduced the bandit term, Bx(T), and the skeleton improvement term, S(T),

$$ B_x(T) = \sum_{t=1}^{T} \mathbb{1}\Big\{ x_t = x,\ O\big(\hat{M}_t(\mathcal{A}(t))\big) \subseteq O(M),\ \|b_{\hat{M}_t(\mathcal{A}(t))} - b_M\|_\infty \leqslant \varepsilon \Big\}, \quad (18) $$

$$ S(T) = \sum_{t=1}^{T} \mathbb{1}\Big\{ O\big(\hat{M}_t(\mathcal{A}(t))\big) \not\subseteq O(M) \ \text{or}\ \|b_{\hat{M}_t(\mathcal{A}(t))} - b_M\|_\infty > \varepsilon \Big\}. \quad (19) $$

Furthermore, E(S(T)) = O(1) and E(Bx(T)) = O(1) for a non-critical state-action pair, while for a critical state-action pair x,

$$ \mathbb{E}(B_x(T)) \leqslant \frac{\Delta_x(M)}{K_x(M) - \varepsilon\, \Gamma_s(M)} \log T + O(1). $$

5 Numerical experiments

In this section, we discuss the practical implementation and numerical aspects of IMED-RL; the discussion is extended in Appendix F. Source code is available on GitHub.⁷
⁷ Plain-text URL: https://github.com/fabienpesquerel/IMED-RL
Computing the IMED-RL index. At each time step, we run the value iteration algorithm on M̂t(A(t)) to compute the optimal bias and the associated potential function φM̂t(A(t)). This task is standard. Once done, one must compute the value of the optimization problem Ks,a(t), which belongs to the category of convex optimization problems with linear constraints. Such problems have been studied under the name of partially-finite convex optimization, e.g. in Borwein and Lewis [1991]. It is possible to compute Ks,a(t) by considering the Legendre-Fenchel dual, and one does not need to compute the optimal distribution to know the value of the optimization problem.

Proposition 2 (Index computation, Honda and Takemura [2015], Theorem 2). Let (s, a) be in XM, M = mmax(s, a) + max s′∈S bM(s′), and γ > φM(r(s, a) ⊗ p(·|s, a)); then

$$ K_{s,a}(M, \gamma) = \begin{cases} \max\limits_{0 \leqslant x \leqslant \frac{1}{M - \gamma}} \mathbb{E}_{\substack{R \sim r(s,a) \\ S \sim p(\cdot|s,a)}} \Big[ \log\big( 1 - (R + b_M(S) - \gamma)\, x \big) \Big] & \text{if } M > \gamma, \\ +\infty & \text{otherwise.} \end{cases} \quad (20) $$

If γ ⩽ φM(r(s, a) ⊗ p(·|s, a)), then Ks,a(M, γ) = 0. In particular, Proposition 2 sometimes allows one to write Ks,a(t) almost in closed form, e.g. when Fs,a, defined in Assumption 3, is a set of multinomials with unknown support (and only the upper bound mmax is known). In Appendix F, we discuss this numerical computation further.

Computational complexity. In terms of the sizes of the state and action spaces, the complexity of IMED-RL at each time step scales as O(S²A), the complexity of value iteration. Indeed, at each time step, IMED-RL runs value iteration using the actions available in the skeleton, then computes the indexes of the available actions at the current state, and finally picks an argmin. The complexity of value iteration is O(S²A), the complexity of computing the A necessary indexes is O(SA), and the complexity of picking an argmin amongst those A indexes is O(A). Therefore, the per-time-step complexity of IMED-RL scales as O(S²A). However, this scaling is mainly an upper bound, as value iteration is run with the actions that are within the skeleton. By design of the skeleton, we experimentally observe that, after some time, the skeleton contains one action per state (the optimal one). We provide more details in Appendix F, Lazy update paragraph.

Practical comparison. In practice, most of the complexity of IMED-RL is in the analysis rather than in the algorithm: compared to PSRL and UCRL3, IMED-RL takes neither a confidence parameter nor any hyperparameter. Also, IMED-RL uses value iteration as a routine, which is faster than the extended value iteration used in UCRL3. Q-learning technically takes an exploration parameter (ε-greedy exploration), or an exploration scheme when ε is slowly decreased with time.

Environments. In different environments, we illustrate in Figure 2 and Figure 3 the performance of IMED-RL against the strategies UCRL3 (Bourel et al. [2020]), PSRL (Osband et al. [2013]), and Q-learning (run with discount γ = 0.99 and optimistic initialization). As stated in the introduction, any finite communicating MDP can be turned into an ergodic one, since on such MDPs any stochastic policy π : S → P(As) with full support Supp(π(s)) = As is ergodic. Hence, by mixing its transition p with that obtained from playing a uniform policy, formally pε(·|s, a) = (1 − ε) p(·|s, a) + ε Σa′∈As p(·|s, a′)/|As|, for an arbitrarily small ε > 0, one obtains an ergodic MDP.
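As one illustration of this ergodification, here is a small numpy sketch; it assumes the tabular representation in which every state has the same number of actions, so that the transition kernel is an (S, A, S) tensor, and the function name is our own.

```python
import numpy as np

def ergodify(P, eps=1e-3):
    """Mix each transition p(.|s,a) with the uniform-policy transition at state s:
    p_eps(.|s,a) = (1 - eps) p(.|s,a) + eps * sum_a' p(.|s,a') / |A_s|,
    turning a communicating MDP into an ergodic one for arbitrarily small eps.

    P : (S, A, S) transition tensor with P[s, a] = p(.|s, a).
    """
    uniform_kernel = P.mean(axis=1, keepdims=True)  # (S, 1, S): average over actions
    return (1.0 - eps) * P + eps * uniform_kernel
```

With eps = 0 this returns the original communicating MDP, matching the two regimes compared in the experiments below.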
In the experiments, we consider ergodic versions of the classical n-state RiverSwim, 2-room, and 4-room environments with ε = 10⁻³, as well as the classical communicating versions (ε = 0).

n-state RiverSwim environment. As illustrated by Figure 2, the performance of IMED-RL is particularly good, and the regret of IMED-RL is below the regrets of all its competitors, even when the MDP is communicating only. This numerical performance grounds the previous theoretical analysis. While using IMED-RL in communicating MDPs is not endorsed by our theoretical analysis, it is interesting to see how much this hypothesis matters for the numerical performance of IMED-RL. We therefore ran an experiment on another classical environment, 2-rooms.

n-rooms environment. As illustrated by Figure 3, the performance of IMED-RL is particularly good, even surprisingly good, in this communicating-only environment. Those experiments are a clue that the IMED-RL strategy may still be reasonable, although not necessarily optimal, in some communicating MDPs. All experiments take less than an hour to run on a standard CPU.

Future work. Although not intended for non-ergodic MDPs, IMED-RL exhibits state-of-the-art numerical performance in communicating-only MDPs (see Appendix F.2 for additional experiments). Hence, IMED-RL might prove insightful to pave the way towards the communicating case, as it seems possible to get a controlled regret also in the case of communicating MDPs. Both problem-dependent and worst-case regret bounds are interesting in this regard. Another direction we intend to explore is the adaptation of the main ideas of IMED-RL to function approximation frameworks, such as neural networks and kernel methods.

Conclusion. In this paper, we introduced IMED-RL, a numerically efficient algorithm for the average-reward criterion under the ergodic assumption, for which we derive an upper bound on the regret matching the known regret lower bound. Further, its surprisingly good numerical performance in communicating-only MDPs opens the path to future work in MDPs that are communicating only.

Acknowledgments and Disclosure of Funding. This work has been supported by the French Ministry of Higher Education and Research, Inria, Scool, the Hauts-de-France region, the MEL, and the I-Site ULNE regarding project R-PILOTE-19-004-APPRENF. The PhD of Fabien Pesquerel is supported by a grant from École Normale Supérieure.
1. What is the focus and contribution of the paper regarding ergodic MDPs?
2. What are the strengths of the proposed policy, particularly its uniqueness and regret bounds?
3. What are the weaknesses of the paper, especially regarding the strong ergodic assumption?
4. Do you have any questions or suggestions regarding the applicability of the IMED-RL index to communicating MDPs?
5. Are there any connections between the IMED-RL index and optimistic values?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
The paper considers ergodic MDPs and proposes an index policy, IMED-RL. The IMED-RL index is built on the IMED policy for multi-armed stochastic bandits. A regret bound for IMED-RL is provided and shown to match the lower bound for ergodic RL problems.

Strengths And Weaknesses
Strengths:
- The proposed policy is based on the IMED-RL index, which is new for RL and different from the commonly used optimism-based algorithms.
- Regret bounds of IMED-RL are provided for ergodic MDPs, and the upper bound and lower bound match.
- Numerical experiments show better performance of IMED-RL compared with prior algorithms that have certain regret guarantees.

Weaknesses:
- The ergodic assumption is very strong. Most MDPs, including RiverSwim in the numerical experiments, are not ergodic. The ergodic assumption is not typical, and there are many RL algorithms with performance guarantees that do not require it. Results requiring the ergodic assumption seem very limited.
- The provided justification for focusing on ergodic MDPs is that it is the only class of MDPs with regret lower bounds. However, Jaksch et al. [2010b] has a regret lower bound for the much more general class of communicating MDPs.
- Minor: the definitions of several notations are hard to find, making the paper not easy to follow. For example, the definition of the count number N_{s,a}(T) is hidden in the statement of Lemma 1, and the definition of \phi_M is a bit hidden within the lines.

Questions
It looks like one could compute the IMED-RL index for communicating MDPs. Would it be possible to apply IMED-RL to communicating MDPs with (possibly weaker) regret bounds? Are there any connections between the IMED-RL index and some optimistic values?

Limitations
The assumptions and limitations are clearly stated in the paper.
NIPS
Title IMED-RL: Regret optimal learning of ergodic Markov decision processes

Abstract We consider reinforcement learning in a discrete, undiscounted, infinite-horizon Markov Decision Problem (MDP) under the average reward criterion, and focus on the minimization of the regret with respect to an optimal policy, when the learner does not know the rewards nor the transitions of the MDP. In light of their success at regret minimization in multi-armed bandits, popular bandit strategies, such as the optimistic UCB, KL-UCB or the Bayesian Thompson sampling strategy, have been extended to the MDP setup. Despite some key successes, existing strategies for solving this problem either fail to be provably asymptotically optimal, or suffer from a prohibitive burn-in phase and computational complexity when implemented in practice. In this work, we shed a novel light on regret minimization strategies, by extending to reinforcement learning the computationally appealing Indexed Minimum Empirical Divergence (IMED) bandit algorithm. Traditional asymptotic problem-dependent lower bounds on the regret are known under the assumption that the MDP is ergodic. Under this assumption, we introduce IMED-RL and prove that its regret upper bound asymptotically matches the regret lower bound. We discuss both the case when the supports of the transitions are unknown, and the more informative but a priori harder-to-exploit-optimally case when they are known. Rewards are assumed light-tailed and semi-bounded from above. Last, we provide numerical illustrations on classical tabular MDPs, ergodic and communicating only, showing the competitiveness of IMED-RL in finite time against state-of-the-art algorithms. IMED-RL also benefits from a light complexity.

1 Introduction

We study Reinforcement Learning (RL) with an unknown finite Markov Decision Problem (MDP) under the average-reward criterion, in which a learning algorithm interacts sequentially with the dynamical system, without any reset, in a single and infinite sequence of observations, actions, and rewards, while trying to maximize its total accumulated rewards over time. Formally, we consider a finite MDP M = (S, A, p, r) where S is the finite set of states, A = (As)s∈S specifies the set of actions available in each state, and we introduce the set of pairs XM = {(s, a) : s ∈ S, a ∈ As} for convenience. Further,¹ p : XM → P(S) is the transition distribution function and r : XM → P(R) the reward distribution function, with corresponding mean reward function denoted by m : XM → R. An agent interacts with the MDP at discrete time steps t ∈ N∗ and yields a random sequence (st, at, rt)t of states, actions, and rewards in the following way. At each time step t, the agent observes the current state st and decides on the action at to take based on st and possibly past information, i.e. previous elements of the sequence. After playing at, it observes a reward rt ∼ r(st, at), the current state of the MDP changes to st+1 ∼ p(·|st, at), and the agent proceeds sequentially. In the average-reward setting, one is interested in maximizing the limit of $\frac{1}{T}\sum_{t=1}^{T} r_t$ as T → ∞, provided it exists.
¹ Given a set E, P(E) denotes the set of probability distributions on E.
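To fix ideas, the following hedged Python sketch simulates this reset-free interaction and reports the empirical average reward. The helper name, the Bernoulli rewards, and the policy interface are illustrative assumptions of ours, not part of the paper.

```python
import numpy as np

def run_interaction(P, R, policy, s1, T, rng=np.random.default_rng(0)):
    """Single uninterrupted interaction with a tabular MDP, as in the setting above.

    P      : (S, A, S) transition tensor, P[s, a] = p(.|s, a).
    R      : (S, A) mean rewards; rewards are drawn as Bernoulli(R[s, a]) here
             for simplicity (any light-tailed distribution would do).
    policy : callable (t, history, s) -> action, allowed to use past information.
    Returns the empirical average reward (1/T) sum_t r_t.
    """
    s, total, history = s1, 0.0, []
    for t in range(1, T + 1):
        a = policy(t, history, s)
        r = rng.binomial(1, R[s, a])                # reward r_t ~ r(s_t, a_t)
        s_next = rng.choice(P.shape[2], p=P[s, a])  # s_{t+1} ~ p(.|s_t, a_t)
        history.append((s, a, r))
        total += r
        s = s_next
    return total / T
```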
This setting is a popular framework for studying sequential decision-making problems; it can be traced back to seminal papers such as those of Graves and Lai [1997] and Burnetas and Katehakis [1997]. This theoretical framework allows one to study the exploration-exploitation trade-off that arises from the sequential optimization problem a learner is trying to solve while being uncertain about the very problem it is optimizing. In this paper, one is interested in developing a sampling strategy that is optimal amongst strategies aiming at maximizing the average reward, i.e. one that balances exploration and exploitation in an optimal way. To assert optimality, we define the notion of regret and state a regret lower bound, with the purpose of defining a theoretically sound notion of optimality that is problem-dependent. While the regret defines the discrepancy to optimality of a learning strategy, a problem-dependent regret lower bound formally assesses the minimal regret that any learning algorithm must incur on a given MDP problem, by computing a minimal rate of exploration. Because this minimal rate of exploration depends on the problem, it is said to be problem-dependent, as opposed to the worst-case regret studies that exist in the MDP literature (e.g. Jaksch et al. [2010]). Regret lower bounds currently exist in the literature when the MDP M is assumed to be ergodic.² Hence we hereafter make this assumption, in order to be able to compare the regret of our algorithm to an optimal bound. Similarly, to ensure fast enough convergence of the empirical estimate of the reward to the true mean, an assumption controlling the rate of convergence to the mean is necessary.

Assumption 1 (Light-tail rewards). For all x ∈ XM, the moment generating function of the reward exists in a neighborhood of 0: ∃λx > 0 such that ∀λ ∈ R with |λ| < λx, E_{R∼r(x)}[exp(λR)] < ∞.

Policy. Regret and ergodicity are defined using properties of the set of stationary deterministic policies Π(M) on M. On M, each stationary deterministic policy π : S → As defines a Markov reward process, i.e. a Markov chain on S with kernel pπ : s ∈ S ↦ p(·|s, π(s)) ∈ P(S), together with rewards rπ : s ∈ S ↦ r(s, π(s)) ∈ P(R) and associated mean rewards mπ : s ∈ S ↦ m(s, π(s)) ∈ R. The t-step transition kernel of π on M is denoted pπᵗ. We denote by $\overline{p}_\pi = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} p_\pi^{t-1} : S \to \mathcal{P}(S)$ the Cesaro average of pπ. A learning agent executes a sequence of policies πt ∈ Π(M), t ⩾ 1, where πt depends on the past information (st′, at′, rt′)t′<t. With a slight abuse of notation, a sequence of identical decision rules, πt = π for all t, is also denoted π.

Gain. The cumulative reward (value) at time T, starting from an initial state s1, of a policy π = (πt)t is formally given by

$$ V_{s_1}(M, \pi, T) = \mathbb{E}_{\pi,M,s_1}\Big[\sum_{t=1}^{T} r_t\Big] = \mathbb{E}_{\pi,M,s_1}\Big[\sum_{t=1}^{T} m(s_t, a_t)\Big] = \sum_{t=1}^{T} \Big( \prod_{t'=1}^{t-1} p_{\pi_{t'}}\, m_{\pi_t} \Big)(s_1). \quad (1) $$

For π ∈ Π(M), the average reward $\frac{1}{T} V_{s_1}(M, \pi, T)$ tends to $(\overline{p}_\pi m_\pi)(s_1)$ as T → ∞. The gain of a policy π ∈ Π(M), when starting from state s1, is defined by $g_\pi(s_1) = (\overline{p}_\pi m_\pi)(s_1)$, and the optimal gain is defined as g⋆(s1) = maxπ∈Π(M) gπ(s1). Os(M) = {π ∈ Π(M) : gπ(s) = g⋆(s)} is the set of policies achieving maximal gain on M starting from state s.

Definition 1 (Regret). The regret at time T of a learning policy π = (πt)t starting at state s on an MDP M is defined with respect to any π⋆ ∈ Os(M) as

$$ R_{\pi,s}(M, T; \pi_\star) = V_s(M, \pi_\star, T) - V_s(M, \pi, T). \quad (2) $$

In this paper, we aim to find a learning algorithm with asymptotically minimal regret.
² We prefer the term ergodic over the more accurate one, irreducible, as it is a standard abuse of terminology in the MDP community. Mathematically, an MDP is ergodic if it is irreducible, aperiodic, and positive recurrent.
Lemma 1 will prove that for all optimal policies π⋆, the regrets are the same up to a bounded term, which therefore does not count in the asymptotic analysis. Some authors, such as Bourel et al. [2020], define the regret as T gM(s) − Vs(M, π, T), which is equal to the one we defined up to a bounded term (again by Lemma 1). No stationary policy can be optimal at all times, and the important fact is that all those notions of regret induce the same asymptotic lower bound.

In the considered setting, the learning agent interacts with the MDP without any reset. The minimal assumption would be to allow the agent to come back, with positive probability and in finite time, from any initial mistake, so that the agent is not stuck in a sub-optimal area of the system. This amounts to assuming that the MDP is communicating, that is, ∀s, s′, ∃π, t ∈ N : pπᵗ(s′|s) > 0. However, in the literature, lower bounds on the regret are stated for MDPs satisfying a stronger assumption, ergodicity. Since one is interested in crafting an algorithm matching a lower bound, we consider this stronger assumption.

Assumption 2 (Ergodic MDP). The MDP M is ergodic, that is, ∀s, s′, ∀π, ∃t ∈ N : pπᵗ(s′|s) > 0.

Intuitively, this means that for all policies and all couples of states, there exists a finite trajectory of positive probability between the states. Interestingly, the ergodic property can be assumed on the MDP or on the set of policies in which we seek an optimal one. For instance, in any communicating MDP all ε-soft policies³ are ergodic; more on this in the experiments of Section 5 and in Appendix E.

Related work. Had the MDP only one state, it would be a bandit problem. Lower bounds on the bandit regret, and algorithms matching these lower bounds, sometimes up to a constant factor, are well studied in the bandit literature. Therefore, bandit sampling strategies with known theoretical guarantees have inspired RL algorithms. The KL-UCB algorithm (Burnetas and Katehakis [1996], Maillard et al. [2011]) has inspired the strategy of the seminal paper of Burnetas and Katehakis [1997], as well as the more recent KL-UCRL strategy (Filippi et al. [2010], Talebi and Maillard [2018]). Inspired by the UCB algorithm (Agrawal [1995], Auer et al. [2002]), a number of strategies implementing the optimism principle have emerged, such as UCRL (Auer and Ortner [2006]), UCRL2 (Jaksch et al. [2010]) and UCRL3 (Bourel et al. [2020]) (and beyond, Azar et al. [2017], Dann et al. [2017] for the related episodic setup). The strategy PSRL (Osband et al. [2013]) is inspired by Thompson sampling (Thompson [1933]).

Outline and contribution. In this work, we build on the IMED strategy (Honda and Takemura [2015]), a bandit algorithm that benefits from practical and optimal guarantees but has never been used by the RL community. We fill this gap by proposing the IMED-RL algorithm, which we prove to be asymptotically optimal for the average-reward criterion. We revisit the notion of skeleton (Equation 12) introduced in the seminal work of Burnetas and Katehakis [1997], with a subtle but key modification that prevents a prohibitive burn-in phase⁴ (see Appendix G for further details). Further, this novel notion of skeleton enables IMED-RL to remove any tracking or hyperparameter and to mimic a stochastic-policy-iteration-like algorithm.
Further, this skeleton scales naturally with the studied MDP, as it does not explicitly refer to absolute quantities such as the time. We prove that our proposed IMED-RL is asymptotically optimal and show its numerical competitiveness. Building on IMED, we make an additional assumption on the rewards that is less restrictive than the common bounded-reward hypothesis made in the RL community.

Assumption 3 (Semi-bounded rewards). For all x ∈ X, r(x) belongs to a subset Fx ⊂ P(R) known to the learner.⁵ There exists a known quantity mmax(x) ∈ R such that for all x ∈ X, the support Supp(r(x)) of the reward distribution is semi-bounded from above, Supp(r(x)) ⊂ ]−∞, mmax(x)], and its mean satisfies m(x) < mmax(x).

Ergodic assumption. While many recent works focused on worst-case regret bounds only (e.g. Domingues et al. [2021], Zanette and Brunskill [2019], Jin et al. [2018] and citations therein), studying problem-dependent optimal regret bounds has been somewhat overlooked. Being more general is always more appealing, but the restriction from communicating MDPs to ergodic MDPs allows us to target exact asymptotic optimality: not just a bound, and not just a worst-case bound. Ergodic MDPs are the only case in which explicit problem-dependent lower bounds are known and hence can be directly used to build a strategy. Indeed, the main challenge towards problem-dependent optimality is that existing lower bounds for exploration problems in MDPs are usually written in terms of non-convex optimization problems. This implicit form makes it hard to understand the actual complexity of the setting and, thus, to design optimal algorithms. Existing proof strategies for state-of-the-art algorithms (UCRL, PSRL, etc.) ensure a regret bound for communicating MDPs but fail to provide optimality guarantees even in the ergodic case. We believe that deriving a sharp result in the ergodic case might prove insightful to pave the way towards the communicating case. From a theoretical standpoint, and related to UCRL-type strategies, the modern analysis of KL-UCRL by Talebi and Maillard [2018] also makes the ergodic assumption. This hypothesis has also been used in the theoretical work of Tewari and Bartlett [2007] and in the work of Ok et al. [2018], which concerns structured MDPs. Related to this assumption are works interested in identification and sample complexity. Wang [2017] introduced a primal-dual method to compute an ε-optimal policy and bounded the number of sampled transitions needed to reach this goal. Jin and Sidford [2020] relaxed the ergodic hypothesis by using a mixing hypothesis that implies the uniqueness of the recurrent class for each policy. In this setting, the authors also derive a bound on the number of samples needed to compute an ε-optimal policy.
³ A policy π : S → P(As) is ε-soft if π(a|s) ⩾ ε/|As| for all s and a.
⁴ The skeleton in Burnetas and Katehakis [1997] is sometimes empty at some states when t is too small; this causes the strategy to work well only after t is large enough to ensure that the skeleton contains at least one action in each state.
⁵ E.g. Bernoulli, multinomial with unknown support, beta, truncated Gaussians, a mixture of those, etc.

2 Regret lower bound

In this section, we recall the regret lower bound for ergodic MDPs and provide a few insights about it.

Characterizing optimal policies. Relying on classical results that can be found in the books of Puterman [1994] and Hernández-Lerma and Lasserre [1996], we give a useful characterization of optimal policies that is used to derive a regret lower bound.
1. What is the focus and contribution of the paper on regret minimization in ergodic undiscounted MDPs?
2. What are the strengths of the proposed IMED-RL algorithm, particularly in terms of its theoretical guarantees and practical efficiency?
3. How does IMED-RL compare to other exploration algorithms, such as UCRL and PSRL, in terms of worst-case regret and practical advantages?
4. Do you have any concerns regarding the originality of the work, given that it builds upon established frameworks and existing algorithmic ideas?
5. What are some limitations of the paper, especially regarding the scalability of IMED-RL to problems with many states and actions?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper considers regret minimization in ergodic undiscounted MDPs under the average-reward criterion. It builds upon the classic results by Burnetas and Katehakis (1997) for this setting, and provides an adaptation of the IMED bandit algorithm (Honda and Takemura, 2015) to it. This algorithm, called IMED-RL, asymptotically matches the lower bound by Burnetas and Katehakis, and can thus be considered optimal. The theoretical results are complemented by numerical simulations showing that IMED-RL is very efficient in practice.

Strengths And Weaknesses
The paper is well written and clear. The authors do a great job of explaining the theoretical framework, which is classic but also a notoriously complicated one. The difference between existing results and original contributions is also well explained. I would only spend more words on describing the original IMED (bandit) algorithm of which IMED-RL is an adaptation to MDPs.
Originality is limited, since the work builds upon a well-established theoretical framework and an existing algorithmic idea. However, there is some originality in the theoretical analysis, for instance in the concept of the skeleton.
The results are significant for the area of efficient exploration in RL. As mentioned by the authors, it is not easy to match the lower bound by B&K with a reasonably practical algorithm. Instead, besides its strong theoretical guarantees, IMED-RL shows good performance in practice too.

Questions
How would the worst-case regret of IMED-RL compare with that of UCRL and PSRL? What do you think is the advantage of IMED-RL over the other exploration algorithms in practice?
Minor:
- line 19: communicative -> communicating
- Definition 1: s_1 should be s or vice-versa
- line 92: why is UCB "infamous"?
- line 124: communication -> communicating
- Footnote 2: square bracket
- line 258: "numerical issues" is an odd choice of words; it usually refers to numerical problems in the implementation. I think you mean something like "empirical aspects"
- line 273: missing space after PSRL

Limitations
Limitations should be discussed more. For instance, how does IMED-RL scale to problems with many states and actions?
NIPS
Title
Distribution-Independent PAC Learning of Halfspaces with Massart Noise

Abstract
We study the problem of distribution-independent PAC learning of halfspaces in the presence of Massart noise. Specifically, we are given a set of labeled examples (x, y) drawn from a distribution D on Rd+1 such that the marginal distribution on the unlabeled points x is arbitrary and the labels y are generated by an unknown halfspace corrupted with Massart noise at noise rate η < 1/2. The goal is to find a hypothesis h that minimizes the misclassification error Pr(x,y)∼D[h(x) ≠ y]. We give a poly(d, 1/ε) time algorithm for this problem with misclassification error η + ε. We also provide evidence that improving on the error guarantee of our algorithm might be computationally hard. Prior to our work, no efficient weak (distribution-independent) learner was known in this model, even for the class of disjunctions. The existence of such an algorithm for halfspaces (or even disjunctions) has been posed as an open question in various works, starting with Sloan (1988), Cohen (1997), and was most recently highlighted in Avrim Blum’s FOCS 2003 tutorial.

1 Introduction
Halfspaces, or Linear Threshold Functions (henceforth LTFs), are Boolean functions f : Rd → {±1} of the form f(x) = sign(〈w,x〉 − θ), where w ∈ Rd is the weight vector and θ ∈ R is the threshold. (The function sign : R → {±1} is defined as sign(u) = 1 if u ≥ 0 and sign(u) = −1 otherwise.) The problem of learning an unknown halfspace is as old as the field of machine learning — starting with Rosenblatt’s Perceptron algorithm [Ros58] — and has arguably been the most influential problem in the development of the field. In the realizable setting, LTFs are known to be efficiently learnable in Valiant’s distribution-independent PAC model [Val84] via Linear Programming [MT94]. In the presence of corrupted data, the situation is more subtle and crucially depends on the underlying noise model. In the agnostic model [Hau92, KSS94] – where an adversary is allowed to arbitrarily corrupt an arbitrary η < 1/2 fraction of the labels – even weak learning is known to be computationally intractable [GR06, FGKP06, Dan16]. On the other hand, in the presence of Random Classification Noise (RCN) [AL88] – where each label is flipped independently with probability exactly η < 1/2 – a polynomial time algorithm is known [BFKV96, BFKV97]. In this work, we focus on learning halfspaces with Massart noise [MN06]:

Definition 1.1 (Massart Noise Model). Let C be a class of Boolean functions over X = Rd, Dx be an arbitrary distribution over X, and 0 ≤ η < 1/2. Let f be an unknown target function in C. A noisy example oracle, EXMas(f, Dx, η), works as follows: Each time EXMas(f, Dx, η) is invoked, it returns a labeled example (x, y), where x ∼ Dx, y = f(x) with probability 1 − η(x) and y = −f(x) with probability η(x), for an unknown parameter η(x) ≤ η. Let D denote the joint distribution on (x, y) generated by the above oracle. A learning algorithm is given i.i.d. samples from D and its goal is to output a hypothesis h such that with high probability the error Pr(x,y)∼D[h(x) ≠ y] is small.

An equivalent formulation of the Massart model [Slo88, Slo92] is the following: With probability 1 − η, we have that y = f(x), and with probability η the label y is controlled by an adversary. Hence, the Massart model lies in between the RCN and the agnostic models.
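To make Definition 1.1 concrete, here is a minimal Python sketch of the noisy example oracle EXMas(f, Dx, η) for a halfspace target. The marginal sampler sample_x and the flip-rate function eta_fn stand in for the arbitrary Dx and the adversarially chosen η(·); both, and the example usage values, are placeholders rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def massart_oracle(w_star, eta_bound, sample_x, eta_fn):
    """One call to EX_Mas(f, D_x, eta) for the halfspace f(x) = sign(<w*, x>):
    the clean label is flipped with an unknown, point-dependent probability
    eta(x) <= eta_bound < 1/2."""
    x = sample_x()                              # x ~ D_x (arbitrary marginal)
    y = 1.0 if w_star @ x >= 0 else -1.0        # clean label f(x)
    eta_x = eta_fn(x)                           # adversary's flip rate at x
    assert 0.0 <= eta_x <= eta_bound < 0.5
    if rng.random() < eta_x:                    # Massart flip
        y = -y
    return x, y

# Example usage with placeholder choices for D_x and eta(.)
d, eta = 5, 0.1
w_star = np.ones(d) / np.sqrt(d)
x, y = massart_oracle(w_star, eta, lambda: rng.normal(size=d),
                      lambda x: eta * float(x[0] > 0))
```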
(Note that the RCN model corresponds to the special case that η(x) = η for all x ∈ X.) It is well-known (see, e.g., [MN06]) that poly(d, 1/ε) samples information-theoretically suffice to compute a hypothesis with misclassification error OPT + ε, where OPT is the misclassification error of the optimal halfspace. Also note that OPT ≤ η by definition. The question is whether a polynomial time algorithm exists. The existence of an efficient distribution-independent learning algorithm for halfspaces (or even disjunctions) in the Massart model has been posed as an open question in a number of works. In the first COLT conference [Slo88] (see also [Slo92]), Sloan defined the malicious misclassification noise model (an equivalent formulation of Massart noise, described above) and asked whether there exists an efficient learning algorithm for disjunctions in this model. About a decade later, Cohen [Coh97] asked the same question for the more general class of all LTFs. The question remained open — even for weak learning of disjunctions! — and was highlighted in Avrim Blum’s FOCS 2003 tutorial [Blu03]. Specifically, prior to this work, even the following very basic special case remained open: Given labeled examples from an unknown disjunction, corrupted with 1% Massart noise, can we efficiently find a hypothesis that achieves misclassification error 49%? The reader is referred to slides 39-40 of Avrim Blum’s FOCS’03 tutorial [Blu03], where it is suggested that the above problem might be easier than agnostically learning disjunctions. As a corollary of our main result (Theorem 1.2), we answer this question in the affirmative. In particular, we obtain an efficient algorithm that achieves misclassification error arbitrarily close to η for all LTFs.

1.1 Our Results
The main result of this paper is the following:

Theorem 1.2 (Main Result). There is an algorithm that, for all 0 < η < 1/2, on input a set of i.i.d. examples from a distribution D = EXMas(f, Dx, η) on Rd+1, where f is an unknown halfspace on Rd, runs in poly(d, b, 1/ε) time, where b is an upper bound on the bit complexity of the examples, and outputs a hypothesis h that with high probability satisfies Pr(x,y)∼D[h(x) ≠ y] ≤ η + ε.

See Theorem 2.9 for a more detailed formal statement. For large-margin halfspaces, we obtain a slightly better error guarantee; see Theorem 2.2 and Remark 2.6.

Discussion. We note that our algorithm is non-proper, i.e., the hypothesis h itself is not a halfspace. The polynomial dependence on b in the runtime cannot be removed, even in the noiseless case, unless one obtains strongly-polynomial algorithms for linear programming. Finally, we note that the misclassification error of η translates to error 2η + ε with respect to the target LTF. Our algorithm gives error η + ε, instead of the information-theoretic optimum of OPT + ε. To complement our positive result, we provide some evidence that improving on our (η + ε) error guarantee may be challenging. Roughly speaking, we show (see Theorems B.1 and B.2 in the supplementary material) that natural approaches — involving convex surrogates and refinements thereof — inherently fail, even under margin assumptions. (See Section 1.2 for a discussion.)

Broader Context. This work is part of the broader agenda of designing robust estimators in the distribution-independent setting with respect to natural noise models.
A recent line of work [KLS09, ABL17, DKK+16, LRV16, DKK+17, DKK+18, DKS18, KKM18, DKS19, DKK+19] has given efficient robust estimators for a range of learning tasks (both supervised and unsupervised) in the presence of a small constant fraction of adversarial corruptions. A limitation of these results is the assumption that the good data comes from a “tame” distribution, e.g., Gaussian or isotropic log-concave distribution. On the other hand, if no assumption is made on the good data and the noise remains fully adversarial, these problems become computationally intractable [Ber06, GR06, Dan16]. This suggests the following general question: Are there realistic noise models that allow for efficient algorithms without imposing (strong) assumptions on the good data? Conceptually, the algorithmic results of this paper could be viewed as an affirmative answer to this question for the problem of learning halfspaces.

1.2 Technical Overview
In this section, we provide an outline of our approach and a comparison to previous techniques. Since the distribution on the unlabeled data is arbitrary, we can assume w.l.o.g. that the threshold θ = 0.

Massart Noise versus RCN. Random Classification Noise (RCN) [AL88] is the special case of Massart noise where each label is flipped with probability exactly η < 1/2. At first glance, it might seem that Massart noise is easier to deal with computationally than RCN. After all, in the Massart model we add at most as much noise as in the RCN model. It turns out that this intuition is fundamentally flawed. Roughly speaking, the ability of the Massart adversary to choose whether to perturb a given label and, if so, with what probability (which is unknown to the learner), makes the design of efficient algorithms in this model challenging. In particular, the well-known connection between learning with RCN and the Statistical Query (SQ) model [Kea93, Kea98] no longer holds, i.e., the property of being an SQ algorithm does not automatically suffice for noise-tolerant learning with Massart noise. We note that this connection with the SQ model is leveraged in [BFKV96, BFKV97] to obtain their polynomial time algorithm for learning halfspaces with RCN.

Large Margin Halfspaces. To illustrate our approach, we start by describing our learning algorithm for γ-margin halfspaces on the unit ball. That is, we assume |〈w∗,x〉| ≥ γ for every x in the support, where w∗ ∈ Rd with ‖w∗‖2 = 1 defines the target halfspace hw∗(x) = sign(〈w∗,x〉). Our goal is to design a poly(d, 1/ε, 1/γ) time learning algorithm in the presence of Massart noise. In the RCN model, the large margin case is easy because the learning problem is essentially convex. That is, there is a convex surrogate that allows us to formulate the problem as a convex program. We can use SGD to find a near-optimal solution to this convex program, which automatically gives a strong proper learner. This simple fact does not appear explicitly in the literature, but follows easily from standard tools. [Byl94] showed that a variant of the Perceptron algorithm (which can be viewed as gradient descent on a particular convex objective) learns γ-margin halfspaces in poly(d, 1/ε, 1/γ) time. The algorithm in [Byl94] requires an additional anti-concentration condition about the distribution, which is easy to remove. In Appendix C, we show that a “smoothed” version of Bylander’s objective suffices as a convex surrogate under only the margin assumption.
Roughly speaking, the reason that a convex surrogate works for RCN is that the expected effect of the noise on each label is known a priori. Unfortunately, this is not the case for Massart noise. We show (Theorem B.1 in Appendix B) that no convex surrogate can lead to a weak learner, even under a margin assumption. That is, if ŵ is the minimizer of G(w) = E(x,y)∼D[φ(y〈w,x〉)], where φ can be any convex function, then the hypothesis sign(〈ŵ,x〉) is not even a weak learner. So, in sharp contrast with the RCN case, the problem is non-convex in this sense.

Our Massart learning algorithm for large margin halfspaces still uses a convex surrogate, but in a qualitatively different way. Instead of attempting to solve the problem in one-shot, our algorithm adaptively applies a sequence of convex optimization problems to obtain an accurate solution in disjoint subsets of the space. Our iterative approach is motivated by a new structural lemma (Lemma 2.5) establishing the following: Even though minimizing a convex proxy does not lead to small misclassification error over the entire space, there exists a region with non-trivial probability mass where it does. Moreover, this region is efficiently identifiable by a simple thresholding rule. Specifically, we show that there exists a threshold T > 0 (which can be found algorithmically) such that the hypothesis sign(〈ŵ,x〉) has error bounded by η + ε in the region RT = {x : |〈ŵ,x〉| ≥ T}. Here ŵ is any near-optimal solution to an appropriate convex optimization problem, defined via a convex surrogate objective similar to the one used in [Byl94]. We note that Lemma 2.5 is the main technical novelty of this paper and motivates our algorithm. Given Lemma 2.5, in any iteration i we can find the best threshold T(i) using samples, and obtain a learner with misclassification error η + ε in the corresponding region. Since each region has non-trivial mass, iterating this scheme a small number of times allows us to find a non-proper hypothesis (a decision-list of halfspaces) with misclassification error at most η + ε in the entire space.

The idea of iteratively optimizing a convex surrogate was used in [BFKV96] to learn halfspaces with RCN without a margin. Despite this similarity, we note that the algorithm of [BFKV96] fails to even obtain a weak learner in the Massart model. We point out two crucial technical differences: First, the iterative approach in [BFKV96] was needed to achieve polynomial running time. As mentioned already, a convex proxy is guaranteed to converge to the true solution with RCN, but the convergence may be too slow (when the margin is tiny). In contrast, with Massart noise (even under a margin condition) convex surrogates cannot even give weak learning in the entire domain. Second, the algorithm of [BFKV96] used a fixed threshold in each iteration, equal to the margin parameter obtained after an appropriate pre-processing of the data (that is needed in order to ensure a weak margin property). In contrast, in our setting, we need to find an appropriate threshold T(i) in each iteration i, according to the criterion specified by our Lemma 2.5.

General Case. Our algorithm for the general case (in the absence of a margin) is qualitatively similar to our algorithm for the large margin case, but the details are more elaborate. We borrow an idea from [BFKV96] that in some sense allows us to “reduce” the general case to the large margin case.
Specifically, [BFKV96] (see also [DV04a]) developed a pre-processing routine that slightly modifies the distribution on the unlabeled points and guarantees the following weak margin property: After preprocessing, there exists an explicit margin parameter σ = Ω(1/poly(d, b)), such that any hyperplane through the origin has at least a non-trivial mass of the distribution at distance at least σ from it. Using this pre-processing step, we are able to adapt our algorithm from the previous subsection to work without margin assumptions in poly(d, b, 1/ε) time. While our analysis is similar in spirit to the case of large margin, we note that the margin property obtained via the [BFKV96, DV04a] preprocessing step is (necessarily) weaker, hence additional careful analysis is required.

Lower Bounds Against Natural Approaches. We have already explained our Theorem B.1, which shows that using a convex surrogate over the entire space cannot give a weak learner. Our algorithm, however, can achieve error η + ε by iteratively optimizing a specific convex surrogate in disjoint subsets of the domain. A natural question is whether one can obtain qualitatively better accuracy, e.g., f(OPT) + ε, by using a different convex objective function in our iterative thresholding approach. We show (Theorem B.2) that such an improvement is not possible: Using a different convex proxy cannot lead to error better than (1 − o(1)) · η. It is a plausible conjecture that improving on the error guarantee of our algorithm is computationally hard. We leave this as an intriguing open problem for future work.

1.3 Prior and Related Work
Bylander [Byl94] gave a polynomial time algorithm to learn large margin halfspaces with RCN (under an additional anti-concentration assumption). The work of Blum et al. [BFKV96, BFKV97] gave the first polynomial time algorithm for distribution-independent learning of halfspaces with RCN without any margin assumptions. Soon thereafter, [Coh97] gave a polynomial-time proper learning algorithm for the problem. Subsequently, Dunagan and Vempala [DV04b] gave a rescaled perceptron algorithm for solving linear programs, which translates to a significantly simpler and faster proper learning algorithm. The term “Massart noise” was coined after [MN06]. An equivalent version of the model was previously studied by Rivest and Sloan [Slo88, Slo92, RS94, Slo96], and a very similar asymmetric random noise model goes back to Vapnik [Vap82]. Prior to this work, essentially no efficient algorithms with non-trivial error guarantees were known in the distribution-free Massart noise model. It should be noted that polynomial time algorithms with error OPT + ε are known [ABHU15, ZLC17, YZ17] when the marginal distribution on the unlabeled data is uniform on the unit sphere. For the case that the unlabeled data comes from an isotropic log-concave distribution, [ABHZ16] give a d^(2^poly(1/(1−2η)))/poly(ε) sample and time algorithm.

1.4 Preliminaries
For n ∈ Z+, we denote [n] := {1, . . . , n}. We will use small boldface characters for vectors and we let ei denote the i-th vector of an orthonormal basis. For x ∈ Rd, and i ∈ [d], xi denotes the i-th coordinate of x, and ‖x‖2 := (∑_{i=1}^{d} x_i^2)^{1/2} denotes the ℓ2-norm of x. We will use 〈x,y〉 for the inner product between x, y ∈ Rd. We will use E[X] for the expectation of random variable X and Pr[E] for the probability of event E. An origin-centered halfspace is a Boolean-valued function hw : Rd → {±1} of the form hw(x) = sign(〈w,x〉), where w ∈ Rd.
(Note that we may assume w.l.o.g. that ‖w‖2 = 1.) We denote by Hd the class of all origin-centered halfspaces on Rd. We consider a classification problem where labeled examples (x, y) are drawn i.i.d. from a distribution D. We denote by Dx the marginal of D on x, and for any x denote Dy(x) the distribution of y conditional on x. Our goal is to find a hypothesis classifier h with low misclassification error. We will denote the misclassification error of a hypothesis h with respect to D by errD0−1(h) = Pr(x,y)∼D[h(x) ≠ y]. Let OPT = min_{h∈Hd} errD0−1(h) denote the optimal misclassification error of any halfspace, and w∗ be the normal vector to a halfspace hw∗ that achieves this.

2 Algorithm for Learning Halfspaces with Massart Noise
In this section, we present the main result of this paper, which is an efficient algorithm that achieves η + ε misclassification error for distribution-independent learning of halfspaces with Massart noise η. Our algorithm uses (stochastic) gradient descent on a convex proxy function L(w) for the misclassification error to identify a region with small misclassification error. The loss function penalizes the points which are misclassified by the threshold function hw, proportionally to the distance from the corresponding hyperplane, while rewarding the correctly classified points at a smaller rate. Directly optimizing this convex objective does not lead to a separator with low error, but guarantees that for a non-negligible fraction of the mass away from the separating hyperplane the misclassification error will be at most η + ε. Classifying points in this region according to the hyperplane and recursively working on the remaining points, we obtain an improper learning algorithm that achieves η + ε error overall. We now develop some necessary notation before proceeding with the description and analysis of our algorithm. Our algorithm considers the following convex proxy for the misclassification error as a function of the weight vector w:

L(w) = E(x,y)∼D[LeakyReluλ(−y〈w,x〉)],

under the constraint ‖w‖2 ≤ 1, where LeakyReluλ(z) = (1 − λ)z if z ≥ 0, and LeakyReluλ(z) = λz if z < 0, and λ is the leakage parameter, which we will set to be λ ≈ η. We define the per-point misclassification error and the error of the proxy function as err(w,x) = Pr_{y∼Dy(x)}[hw(x) ≠ y] and ℓ(w,x) = E_{y∼Dy(x)}[LeakyReluλ(−y〈w,x〉)], respectively. Notice that errD0−1(hw) = Ex∼Dx[err(w,x)] and L(w) = Ex∼Dx[ℓ(w,x)]. Moreover, OPT = Ex∼Dx[err(w∗,x)] = Ex∼Dx[η(x)].

Relationship between proxy loss and misclassification error. We first relate the proxy loss and the misclassification error.

Claim 2.1. For any w, x, we have that ℓ(w,x) = (err(w,x) − λ)|〈w,x〉|.

Proof. We consider two cases:
• Case sign(〈w,x〉) = sign(〈w∗,x〉): In this case, we have that err(w,x) = η(x), while ℓ(w,x) = η(x)(1 − λ)|〈w,x〉| − (1 − η(x))λ|〈w,x〉| = (η(x) − λ)|〈w,x〉|.
• Case sign(〈w,x〉) ≠ sign(〈w∗,x〉): In this case, we have that err(w,x) = 1 − η(x), while ℓ(w,x) = (1 − η(x))(1 − λ)|〈w,x〉| − η(x)λ|〈w,x〉| = (1 − η(x) − λ)|〈w,x〉|.
This completes the proof of Claim 2.1.

Claim 2.1 shows that minimizing Ex∼Dx[ℓ(w,x)/|〈w,x〉|] is equivalent to minimizing the misclassification error. Unfortunately, this objective is hard to minimize as it is non-convex, but one would hope that minimizing L(w) instead may have a similar effect. As we show, this is not true because |〈w,x〉| might vary significantly across points, and in fact it is not possible to use a convex proxy that achieves bounded misclassification error directly.
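A minimal empirical rendering of the proxy L(w) and a stochastic subgradient of it, matching the LeakyReluλ definition above; the data matrix X and label vector y are assumed to hold i.i.d. samples drawn from D, and the function names are illustrative only.

```python
import numpy as np

def leaky_relu(z, lam):
    # LeakyRelu_lambda(z) = (1 - lam) * z for z >= 0, and lam * z for z < 0
    return np.where(z >= 0, (1.0 - lam) * z, lam * z)

def proxy_loss(w, X, y, lam):
    # Empirical version of L(w) = E[LeakyRelu_lambda(-y <w, x>)]
    return leaky_relu(-y * (X @ w), lam).mean()

def proxy_subgradient(w, x, y_i, lam):
    # Subgradient in w of LeakyRelu_lambda(-y_i <w, x>) for a single example;
    # its expectation over a random example lies in the subdifferential of L.
    slope = (1.0 - lam) if -y_i * (x @ w) >= 0 else lam
    return -slope * y_i * x
```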
Our algorithm circumvents this difficulty by approaching the problem indirectly to find a non-proper classifier. Specifically, our algorithm works in multiple rounds, where within each round only points with high value of |〈w,x〉| are considered. The intuition is based on the fact that the approximation of the convex proxy to the misclassification error is more accurate for those points that have comparable distance to the halfspace. In Section 2.1, we handle the large margin case and in Section 2.2 we handle the general case.

2.1 Warm-up: Learning Large Margin Halfspaces
We consider the case that there is no probability mass within distance γ from the separating hyperplane 〈w∗,x〉 = 0, ‖w∗‖2 = 1. Formally, assume that for every x ∼ Dx, ‖x‖2 ≤ 1 and that |〈w∗,x〉| ≥ γ. The pseudo-code of our algorithm is given in Algorithm 1. Our algorithm returns a decision list [(w(1), T(1)), (w(2), T(2)), · · · ] as output. To classify a point x given the decision list, the first i is identified such that |〈w(i),x〉| ≥ T(i) and sign(〈w(i),x〉) is returned. If no such i exists, an arbitrary prediction is returned.

Algorithm 1 Main Algorithm (with margin)
1: Set S(1) = Rd, λ = η + ε, m = Õ(1/(γ^2 ε^4)).
2: Set i ← 1.
3: Draw O((1/ε^2) log(1/(εγ))) samples from Dx to form an empirical distribution D̃x.
4: while Prx∼D̃x[x ∈ S(i)] ≥ ε do
5: Set D(i) = D|S(i), the distribution conditional on the unclassified points.
6: Let L(i)(w) = E(x,y)∼D(i)[LeakyReluλ(−y〈w,x〉)].
7: Run SGD on L(i)(w) for Õ(1/(γ^2 ε^2)) iterations to get w(i) with ‖w(i)‖2 = 1 such that L(i)(w(i)) ≤ min_{w:‖w‖2≤1} L(i)(w) + γε/2.
8: Draw m samples from D(i) to form an empirical distribution D(i)m.
9: Find a threshold T(i) such that Pr(x,y)∼D(i)m[|〈w(i),x〉| ≥ T(i)] ≥ γε and the empirical misclassification error, Pr(x,y)∼D(i)m[hw(i)(x) ≠ y | |〈w(i),x〉| ≥ T(i)], is minimized.
10: Update the unclassified region S(i+1) ← S(i) \ {x : |〈w(i),x〉| ≥ T(i)} and set i ← i + 1.
11: Return the classifier [(w(1), T(1)), (w(2), T(2)), · · · ].

The main result of this section is the following:

Theorem 2.2. Let D be a distribution on Bd × {±1} such that Dx satisfies the γ-margin property with respect to w∗ and y is generated by sign(〈w∗,x〉) corrupted with Massart noise at rate η < 1/2. Algorithm 1 uses Õ(1/(γ^3 ε^5)) samples from D, runs in poly(d, 1/ε, 1/γ) time, and returns, with probability 2/3, a classifier h with misclassification error errD0−1(h) ≤ η + ε.

Our analysis focuses on a single iteration of Algorithm 1. We will show that a large fraction of the points is classified at every iteration within error η + ε. To achieve this, we analyze the convex objective L. We start by showing that the optimal classifier w∗ obtains a significantly negative objective value.

Lemma 2.3. If λ ≥ η, then L(w∗) ≤ −γ(λ − OPT).

Proof. For any fixed x, using Claim 2.1, we have that ℓ(w∗,x) = (err(w∗,x) − λ)|〈w∗,x〉| = (η(x) − λ)|〈w∗,x〉| ≤ −γ(λ − η(x)), since |〈w∗,x〉| ≥ γ and η(x) − λ ≤ 0. Taking expectation over x ∼ Dx, the statement follows.

Lemma 2.3 is the only place where the Massart noise assumption is used in our approach and establishes that points with sufficiently negative value exist. As we will show, any weight vector w with this property can be found with few samples and must accurately classify some region of non-negligible mass away from it (Lemma 2.5). We now argue that we can use stochastic gradient descent (SGD) to efficiently identify a point w that achieves comparably small objective value to the guarantee of Lemma 2.3.
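Before stating the SGD guarantee used in line 7, here is a schematic rendering of Algorithm 1 on a fixed sample. It is a sketch under simplifying assumptions, not the paper's exact procedure: the while-loop mass test of line 4 becomes an empirical check, the threshold search of line 9 scans the empirical margins, and sgd_minimize is a bare-bones projected SGD on the empirical proxy (in the spirit of Lemma 2.4 below).

```python
import numpy as np

def sgd_minimize(X, y, lam, steps=4000, seed=0):
    # Projected SGD on the empirical proxy L(w) = mean(LeakyRelu_lam(-y <w,x>)),
    # constrained to the unit l2-ball; returns the rescaled averaged iterate.
    rng = np.random.default_rng(seed)
    w, w_sum = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    rho = 1.0 / np.sqrt(steps)                    # step size as in Lemma 2.4
    for _ in range(steps):
        i = rng.integers(len(X))
        slope = (1.0 - lam) if -y[i] * (X[i] @ w) >= 0 else lam
        w = w + rho * slope * y[i] * X[i]         # descent step on the proxy
        w /= max(np.linalg.norm(w), 1.0)          # project onto the unit ball
        w_sum += w
    w_bar = w_sum / steps
    return w_bar / max(np.linalg.norm(w_bar), 1e-12)   # unit-norm output

def learn_decision_list(X, y, eta, eps, gamma, rounds=50):
    # Schematic Algorithm 1: returns a decision list [(w, T), ...].
    lam = eta + eps
    decision_list, active = [], np.ones(len(X), dtype=bool)   # S^(1) = R^d
    for _ in range(rounds):
        if active.mean() < eps:                   # empirical version of line 4
            break
        Xi, yi = X[active], y[active]             # sample from D restricted to S^(i)
        w = sgd_minimize(Xi, yi, lam)             # line 7
        margins = np.abs(Xi @ w)
        preds = np.where(Xi @ w >= 0, 1.0, -1.0)
        # Line 9: among thresholds keeping >= gamma*eps of the conditional mass,
        # pick the one minimizing the conditional empirical error.
        cands = [t for t in np.unique(margins)
                 if (margins >= t).mean() >= gamma * eps]
        T = min(cands, key=lambda t: (preds != yi)[margins >= t].mean())
        decision_list.append((w, T))
        active[active] = margins < T              # line 10: shrink S^(i)
    return decision_list

def predict(decision_list, x):
    # The first rule whose margin test fires decides; otherwise an arbitrary label.
    for w, T in decision_list:
        if abs(x @ w) >= T:
            return 1.0 if x @ w >= 0 else -1.0
    return 1.0
```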
We use the following standard property of SGD:

Lemma 2.4 (see, e.g., Theorem 3.4.11 in [Duc16]). Let L be any convex function. Consider the (projected) SGD iteration that is initialized at w(0) = 0 and for every step computes w(t+1/2) = w(t) − ρ v(t) and w(t+1) = argmin_{w:‖w‖2≤1} ‖w − w(t+1/2)‖2, where v(t) is a stochastic gradient such that for all steps E[v(t) | w(t)] ∈ ∂L(w(t)) and ‖v(t)‖2 ≤ 1. Assume that SGD is run for T iterations with step size ρ = 1/√T and let w̄ = (1/T) ∑_{t=1}^{T} w(t). Then, for any ε, δ > 0, after T = Ω(log(1/δ)/ε^2) iterations, with probability at least 1 − δ we have that L(w̄) ≤ min_{w:‖w‖2≤1} L(w) + ε.

By Lemma 2.3, we know that min_{w:‖w‖2≤1} L(w) ≤ −γ(λ − OPT). By Lemma 2.4, it follows that by running SGD on L(w) with projection to the unit ℓ2-ball for O(log(1/δ)/(γ^2(λ − OPT)^2)) steps, we find a w such that L(w) ≤ −γ(λ − OPT)/2 with probability at least 1 − δ. Note that we can assume without loss of generality that ‖w‖2 = 1, as increasing the magnitude of w only decreases the objective value. We now consider the misclassification error of the halfspace hw conditional on the points that are further than some distance T from the separating hyperplane. We claim that there exists a threshold T > 0 where the restriction has non-trivial mass and the conditional misclassification error is small:

Lemma 2.5. Consider a vector w with L(w) < 0. There exists a threshold T ≥ 0 such that (i) Pr(x,y)∼D[|〈w,x〉| ≥ T] ≥ |L(w)|/(2λ), and (ii) Pr(x,y)∼D[hw(x) ≠ y | |〈w,x〉| ≥ T] ≤ λ − |L(w)|/2.

Proof. We will show there is a T ≥ 0 such that Pr(x,y)∼D[hw(x) ≠ y | |〈w,x〉| ≥ T] ≤ λ − ζ, where ζ := |L(w)|/2, or equivalently, Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T}] ≤ 0. For a T drawn uniformly at random in [0, 1], we have that:

∫_0^1 Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T}] dT = Ex∼Dx[(err(w,x) − λ)|〈w,x〉|] + ζ Ex∼Dx[|〈w,x〉|] ≤ Ex∼Dx[ℓ(w,x)] + ζ = L(w) + ζ = L(w)/2 < 0.

Thus, there exists a T̄ such that Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T̄}] ≤ 0. Consider the minimum such T̄. Then we have

∫_{T̄}^1 Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T}] dT ≥ −λ · Pr(x,y)∼D[|〈w,x〉| ≥ T̄].

By definition of T̄, it must be the case that ∫_0^{T̄} Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T}] dT ≥ 0. Therefore,

L(w)/2 ≥ ∫_{T̄}^1 Ex∼Dx[(err(w,x) − λ + ζ) 1{|〈w,x〉| ≥ T}] dT ≥ −λ · Pr(x,y)∼D[|〈w,x〉| ≥ T̄],

which implies that Pr(x,y)∼D[|〈w,x〉| ≥ T̄] ≥ |L(w)|/(2λ). This completes the proof of Lemma 2.5.

Even though minimizing the convex proxy L does not lead to low misclassification error overall, Lemma 2.5 shows that there exists a region of non-trivial mass where it does. This region is identifiable by a simple threshold rule. We are now ready to prove Theorem 2.2.

Proof of Theorem 2.2. We consider the steps of Algorithm 1 in each iteration of the while loop. At iteration i, we consider a distribution D(i) consisting only of points not handled in previous iterations. We start by noting that with high probability the total number of iterations is Õ(1/(γε)). This can be seen as follows: The empirical probability mass under D(i)m of the region {x : |〈w(i),x〉| ≥ T(i)} removed from S(i) to obtain S(i+1) is at least γε (Step 9). Since m = Õ(1/(γ^2 ε^4)), the DKW inequality [DKW56] implies that the true probability mass of this region is at least γε/2 with high probability. By a union bound over i ≤ K = Θ(log(1/ε)/(εγ)), it follows that with high probability we have that PrDx[S(i+1)] ≤ (1 − γε/2)^i for all i ∈ [K]. After K iterations, we will have that PrDx[S(K+1)] ≤ ε/3.
Step 3 guarantees that the mass of S(i) under D̃x is within an additive ε/3 of its mass under Dx, for i ∈ [K]. This implies that the loop terminates after at most K iterations. By Lemma 2.3 and the fact that every D(i) has margin γ, it follows that the minimizer of the loss L(i) has value less than −γ(λ − OPT(i)) ≤ −γε, as OPT(i) ≤ η and λ = η + ε. By the guarantees of Lemma 2.4, running SGD in line 7 on L(i)(·) with projection to the unit ℓ2-ball for O(log(1/δ)/(γ^2 ε^2)) steps, we obtain a w(i) such that, with probability at least 1 − δ, it holds L(i)(w(i)) ≤ −γε/2 and ‖w(i)‖2 = 1. Here δ > 0 is a parameter that is selected so that the following claim holds: With probability at least 9/10, for all iterations i of the while loop we have that L(i)(w(i)) ≤ −γε/2. Since the total number of iterations is Õ(1/(γε)), setting δ to Ω̃(εγ) and applying a union bound over all iterations gives the previous claim. Therefore, the total number of SGD steps per iteration is Õ(1/(γ^2 ε^2)). For a given iteration of the while loop, running SGD requires Õ(1/(γ^2 ε^2)) samples from D(i) which translate to at most Õ(1/(γ^2 ε^3)) samples from D, as Prx∼Dx[x ∈ S(i)] ≥ 2ε/3. Lemma 2.5 implies that there exists T ≥ 0 such that: (a) Pr(x,y)∼D(i)[|〈w,x〉| ≥ T] ≥ γε, and (b) Pr(x,y)∼D(i)[hw(x) ≠ y | |〈w,x〉| ≥ T] ≤ η + ε. Line 9 of Algorithm 1 estimates the threshold using samples. By the DKW inequality [DKW56], we know that with m = Õ(1/(γ^2 ε^4)) samples we can estimate the CDF within error γε^2 with probability 1 − poly(ε, γ). This suffices to estimate the probability mass of the region within additive γε^2 and the misclassification error within ε/3. This is satisfied for all iterations with constant probability. In summary, with high constant success probability, Algorithm 1 runs for Õ(1/(γε)) iterations and draws Õ(1/(γ^2 ε^4)) samples per round for a total of Õ(1/(γ^3 ε^5)) samples. As each iteration runs in polynomial time, the total running time follows. When the while loop terminates, we have that Prx∼Dx[x ∈ S(i)] ≤ 4ε/3, i.e., we will have accounted for at least a (1 − 4ε/3)-fraction of the total probability mass. Since our algorithm achieves misclassification error at most η + 4ε/3 in all the regions we accounted for, its total misclassification error is at most η + 8ε/3. Rescaling by a constant factor gives Theorem 2.2.

Remark 2.6. If the value of OPT is smaller than η − ξ for some value ξ > 0, Algorithm 1 gets misclassification error less than η − Ω(γ^2 ξ^2) when run for ε = O(γ^2 ξ^2). This is because, in the first iteration, L(1)(w(1)) ≤ −γ(λ − OPT)/2 ≤ −γξ/2, which implies, by Lemma 2.5, that the obtained error in S(1) is at most λ − γξ/4. The misclassification error in the remaining regions is at most λ + ε, and region S(1) has probability mass at least γξ/4. Thus, the total misclassification error is at most λ + ε − γ^2 ξ^2/16 = η − Ω(γ^2 ξ^2), when run for ε = O(γ^2 ξ^2).

2.2 The General Case
In the general case, we assume that Dx is an arbitrary distribution supported on b-bit integers. While such a distribution might have exponentially small margin in the dimension d (or even 0), we will preprocess the distribution to ensure a margin condition by removing outliers. We will require the following notion of an outlier:

Definition 2.7 ([DV04a]). We call a point x in the support of a distribution Dx a β-outlier, if there exists a vector w ∈ Rd such that 〈w,x〉^2 ≤ β · Ex∼Dx[〈w,x〉^2].
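Algorithm 2 (next) combines Algorithm 1 with two preprocessing ingredients, sketched empirically below: testing the β-outlier condition of Definition 2.7 along a single supplied direction (the definition quantifies over all directions, which this sketch does not attempt), and the map x ↦ Γ Σ^{−1/2} x that brings a sample into isotropic position. The eigendecomposition route to Σ^{−1/2} is just one implementation choice, and the function names are illustrative.

```python
import numpy as np

def is_beta_outlier_along(x, X, w, beta):
    # Definition 2.7 asks whether SOME direction w satisfies
    # <w, x>^2 <= beta * E[<w, x>^2]; here we test one supplied direction only.
    return (w @ x) ** 2 <= beta * np.mean((X @ w) ** 2)

def whiten(X, Gamma):
    """Map each row x to Gamma * Sigma^{-1/2} x, where Sigma = E[x x^T] is
    estimated empirically, so the transformed sample is (nearly) isotropic."""
    Sigma = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(Sigma)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return Gamma * (X @ inv_sqrt), inv_sqrt   # inv_sqrt is symmetric
```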
We will use Theorem 3 of [DV04a], which shows that any distribution supported on b-bit integers can be efficiently preprocessed using samples so that no large outliers exist.

Lemma 2.8 (Rephrasing of Theorem 3 of [DV04a]). Using m = Õ(d^2 b) samples from Dx, one can identify with high probability an ellipsoid E such that Prx∼Dx[x ∈ E] ≥ 1/2 and Dx|E has no Γ^{−1} = Õ(db)-outliers.

Given this lemma, we can adapt Algorithm 1 for the large margin case to work in general. The pseudo-code is given in Algorithm 2. It similarly returns a decision list [(w(1), T(1), E(1)), (w(2), T(2), E(2)), · · · ] as output.

Algorithm 2 Main Algorithm (general case)
1: Set S(1) = Rd, λ = η + ε, Γ^{−1} = Õ(db), m = Õ(1/(Γ^2 ε^4)).
2: Set i ← 1.
3: Draw O((1/ε^2) log(1/(εΓ))) samples from Dx to form an empirical distribution D̃x.
4: while Prx∼D̃x[x ∈ S(i)] ≥ ε do
5: Run the algorithm of Lemma 2.8 to remove Γ^{−1}-outliers from the distribution D|S(i) by filtering points outside the ellipsoid E(i).
6: Let Σ(i) = E(x,y)∼D|S(i)∩E(i)[xx^T] and set D(i) = Γ Σ(i)^{−1/2} · D|S(i)∩E(i), the distribution D|S(i)∩E(i) brought into isotropic position and rescaled by Γ so that all vectors have ℓ2-norm at most 1.
7: Let L(i)(w) = E(x,y)∼D(i)[LeakyReluλ(−y〈w,x〉)].
8: Run SGD on L(i)(w) for Õ(1/(Γ^2 ε^2)) iterations, to get w(i) with ‖w(i)‖2 = 1 such that L(i)(w(i)) ≤ min_{w:‖w‖2≤1} L(i)(w) + Γε/2.
9: Draw m samples from D(i) to form an empirical distribution D(i)m.
10: Find a threshold T(i) such that Pr(x,y)∼D(i)m[|〈w(i),x〉| ≥ T(i)] ≥ Γε and the empirical misclassification error, Pr(x,y)∼D(i)m[hw(i)(x) ≠ y | |〈w(i),x〉| ≥ T(i)], is minimized.
11: Revert the linear transformation by setting w(i) ← Γ Σ(i)^{−1/2} · w(i).
12: Update the unclassified region S(i+1) ← S(i) \ {x : x ∈ E(i) ∧ |〈w(i),x〉| ≥ T(i)} and set i ← i + 1.
13: Return the classifier [(w(1), T(1), E(1)), (w(2), T(2), E(2)), · · · ].

Our main result is the following theorem:

Theorem 2.9. Let D be a distribution over (d+1)-dimensional labeled examples with bit-complexity b, generated by an unknown halfspace corrupted by Massart noise at rate η < 1/2. Algorithm 2 uses Õ(d^3 b^3/ε^5) samples, runs in poly(d, 1/ε, b) time, and returns, with probability 2/3, a classifier h with misclassification error errD0−1(h) ≤ η + ε.

3 Conclusions
The main contribution of this paper is the first non-trivial learning algorithm for the class of halfspaces (or even disjunctions) in the distribution-free PAC model with Massart noise. Our algorithm achieves misclassification error η + ε in time poly(d, 1/ε), where η < 1/2 is an upper bound on the Massart noise rate. The most obvious open problem is whether this error guarantee can be improved to f(OPT) + ε (for some function f : R → R such that lim_{x→0} f(x) = 0) or, ideally, to OPT + ε. It follows from our lower bound constructions that such an improvement would require new algorithmic ideas. It is a plausible conjecture that obtaining better error guarantees is computationally intractable. This is left as an interesting open problem for future work. Another open question is whether there is an efficient proper learner matching the error guarantees of our algorithm. We believe that this is possible, building on the ideas in [DV04b], but we did not pursue this direction. More broadly, what other concept classes admit non-trivial algorithms in the Massart noise model? Can one establish non-trivial reductions between the Massart noise model and the agnostic model?
And are there other natural semi-random input models that allow for efficient PAC learning algorithms in the distribution-free setting?

Acknowledgments
Part of this work was performed while Ilias Diakonikolas was at the Simons Institute for the Theory of Computing during the program on Foundations of Data Science. Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. This research was performed while Themis Gouleakis was a postdoctoral researcher at USC.
1. What is the focus of the paper regarding learning with noise?
2. What are the strengths of the proposed algorithm, particularly in its application to halfspaces?
3. Are there any concerns or suggestions regarding the presentation of the material, such as explaining the equivalence of two definitions or providing more information about the relevance of the result to the broader field of learning theory?
4. How does the reviewer assess the significance and novelty of the paper's contribution in the context of prior works?
5. Are there any questions or issues that the reviewer raises but does not explicitly state, such as the possibility of proper learning halfspaces in the given model or the importance of considering the distinction between real-valued and Boolean domains?
Review
Review
The paper gives a PAC learning algorithm for the basic problem of halfspaces in a model of learning with noise. The algorithm uses ideas from previous related results in the simpler model of random classification noise, with important new ideas. Learning with noise is a basic topic in learning theory. It can be argued that the most studied models (random misclassification noise and malicious noise) are either unrealistically benign (even though the related SQ model is very important) or unrealistically malicious, and there is a great need for the study of more realistic models. The Massart noise model is a candidate for such a model. As positive learnability results in the general PAC model were not known for this kind of noise, the result of the present paper is quite significant. The algorithm is non-proper, with a kind of decision list as hypothesis. This is especially interesting, as this class of decision lists is a natural class, which has already been studied (called neural decision lists, linear decision lists and threshold decision lists, going back to the work of Marchand, Golea and Rujan 30 years ago). It would be useful to comment on the possibility of properly learning halfspaces in this model.
Comments:
- It is mentioned that an equivalent notion called ``malicious misclassification noise'' was studied (also 30 years ago) by Sloan. An explanation (or at least a reference) should be given for the equivalence of the two definitions. Malicious misclassification noise seems to be an appropriate term fitting the general terminology, and it seems that instead of using two names for the equivalent notions, one should just use malicious misclassification noise, noting that the other one is an equivalent definition. The term ``Massart noise'' is also unjustified, as the cited paper is due to Massart and Nedelec.
- A small additional point is that presumably there is some ``tameness'' assumption for the noise probability function $\eta(x)$ (or not? does this matter for the equivalence proof?).
- A related comment: the literature review does not distinguish between the real-valued and Boolean domains (for example, Daniely's negative result already holds for the Boolean case); some comments on that should be added.
- One more terminological remark: the basic PAC model is distribution-independent, and the term ``PAC learning under a fixed distribution'' is used when the underlying distribution is fixed. Thus the term ``distribution-independent'' in the title seems redundant (of course it is important to emphasize this feature as opposed to results on ``tame'' distributions in the text).
NIPS
Title Distribution-Independent PAC Learning of Halfspaces with Massart Noise Abstract We study the problem of distribution-independent PAC learning of halfspaces in the presence of Massart noise. Specifically, we are given a set of labeled examples (x, y) drawn from a distribution D on R such that the marginal distribution on the unlabeled points x is arbitrary and the labels y are generated by an unknown halfspace corrupted with Massart noise at noise rate η < 1/2. The goal is to find a hypothesis h that minimizes the misclassification error Pr(x,y)∼D [h(x) 6= y]. We give a poly (d, 1/ ) time algorithm for this problem with misclassification error η + . We also provide evidence that improving on the error guarantee of our algorithm might be computationally hard. Prior to our work, no efficient weak (distribution-independent) learner was known in this model, even for the class of disjunctions. The existence of such an algorithm for halfspaces (or even disjunctions) has been posed as an open question in various works, starting with Sloan (1988), Cohen (1997), and was most recently highlighted in Avrim Blum’s FOCS 2003 tutorial. 1 Introduction Halfspaces, or Linear Threshold Functions (henceforth LTFs), are Boolean functions f : Rd → {±1} of the form f(x) = sign(〈w,x〉 − θ), where w ∈ Rd is the weight vector and θ ∈ R is the threshold. (The function sign : R → {±1} is defined as sign(u) = 1 if u ≥ 0 and sign(u) = −1 otherwise.) The problem of learning an unknown halfspace is as old as the field of machine learning — starting with Rosenblatt’s Perceptron algorithm [Ros58] — and has arguably been the most influential problem in the development of the field. In the realizable setting, LTFs are known to be efficiently learnable in Valiant’s distribution-independent PAC model [Val84] via Linear Programming [MT94]. In the presence of corrupted data, the situation is more subtle and crucially depends on the underlying noise model. In the agnostic model [Hau92, KSS94] – where an adversary is allowed to arbitrarily corrupt an arbitrary η < 1/2 fraction of the labels – even weak learning is known to be computationally intractable [GR06, FGKP06, Dan16]. On the other hand, in the presence of Random Classification Noise (RCN) [AL88] – where each label is flipped independently with probability exactly η < 1/2 – a polynomial time algorithm is known [BFKV96, BFKV97]. In this work, we focus on learning halfspaces with Massart noise [MN06]: Definition 1.1 (Massart Noise Model). Let C be a class of Boolean functions over X = Rd, Dx be an arbitrary distribution over X , and 0 ≤ η < 1/2. Let f be an unknown target function in C. A noisy example oracle, EXMas(f,Dx, η), works as follows: Each time EXMas(f,Dx, η) is invoked, it 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. returns a labeled example (x, y), where x ∼ Dx, y = f(x) with probability 1−η(x) and y = −f(x) with probability η(x), for an unknown parameter η(x) ≤ η. Let D denote the joint distribution on (x, y) generated by the above oracle. A learning algorithm is given i.i.d. samples from D and its goal is to output a hypothesis h such that with high probability the error Pr(x,y)∼D[h(x) 6= y] is small. An equivalent formulation of the Massart model [Slo88, Slo92] is the following: With probability 1− η, we have that y = f(x), and with probability η the label y is controlled by an adversary. Hence, the Massart model lies in between the RCN and the agnostic models. 
(Note that the RCN model corresponds to the special case that η(x) = η for all x ∈ X .) It is well-known (see, e.g., [MN06]) that poly(d, 1/ ) samples information-theoretically suffice to compute a hypothesis with misclassification error OPT + , where OPT is the misclassification error of the optimal halfspace. Also note that OPT ≤ η by definition. The question is whether a polynomial time algorithm exists. The existence of an efficient distribution-independent learning algorithm for halfspaces (or even disjunctions) in the Massart model has been posed as an open question in a number of works. In the first COLT conference [Slo88] (see also [Slo92]), Sloan defined the malicious misclassification noise model (an equivalent formulation of Massart noise, described above) and asked whether there exists an efficient learning algorithm for disjunctions in this model. About a decade later, Cohen [Coh97] asked the same question for the more general class of all LTFs. The question remained open — even for weak learning of disjunctions! — and was highlighted in Avrim Blum’s FOCS 2003 tutorial [Blu03]. Specifically, prior to this work, even the following very basic special case remained open: Given labeled examples from an unknown disjunction, corrupted with 1% Massart noise, can we efficiently find a hypothesis that achieves misclassification error 49%? The reader is referred to slides 39-40 of Avrim Blum’s FOCS’03 tutorial [Blu03], where it is suggested that the above problem might be easier than agnostically learning disjunctions. As a corollary of our main result (Theorem 1.2), we answer this question in the affirmative. In particular, we obtain an efficient algorithm that achieves misclassification error arbitrarily close to η for all LTFs. 1.1 Our Results The main result of this paper is the following: Theorem 1.2 (Main Result). There is an algorithm that for all 0 < η < 1/2, on input a set of i.i.d. examples from a distribution D = EXMas(f,Dx, η) on Rd+1, where f is an unknown halfspace on Rd, it runs in poly(d, b, 1/ ) time, where b is an upper bound on the bit complexity of the examples, and outputs a hypothesis h that with high probability satisfies Pr(x,y)∼D[h(x) 6= y] ≤ η + . See Theorem 2.9 for a more detailed formal statement. For large-margin halfspaces, we obtain a slightly better error guarantee; see Theorem 2.2 and Remark 2.6. Discussion. We note that our algorithm is non-proper, i.e., the hypothesis h itself is not a halfspace. The polynomial dependence on b in the runtime cannot be removed, even in the noiseless case, unless one obtains strongly-polynomial algorithms for linear programming. Finally, we note that the misclassification error of η translates to error 2η + with respect to the target LTF. Our algorithm gives error η + , instead of the information-theoretic optimum of OPT + . To complement our positive result, we provide some evidence that improving on our (η + ) error guarantee may be challenging. Roughly speaking, we show (see Theorems B.1 and B.2 in the supplementary material) that natural approaches — involving convex surrogates and refinements thereof — inherently fail, even under margin assumptions. (See Section 1.2 for a discussion.) Broader Context. This work is part of the broader agenda of designing robust estimators in the distribution-independent setting with respect to natural noise models. 
A recent line of work [KLS09, ABL17, DKK+16, LRV16, DKK+17, DKK+18, DKS18, KKM18, DKS19, DKK+19] has given efficient robust estimators for a range of learning tasks (both supervised and unsupervised) in the presence of a small constant fraction of adversarial corruptions. A limitation of these results is the assumption that the good data comes from a “tame” distribution, e.g., Gaussian or isotropic log-concave distribution. On the other hand, if no assumption is made on the good data and the noise remains fully adversarial, these problems become computationally intractable [Ber06, GR06, Dan16]. This suggests the following general question: Are there realistic noise models that allow for efficient algorithms without imposing (strong) assumptions on the good data? Conceptually, the algorithmic results of this paper could be viewed as an affirmative answer to this question for the problem of learning halfspaces. 1.2 Technical Overview In this section, we provide an outline of our approach and a comparison to previous techniques. Since the distribution on the unlabeled data is arbitrary, we can assume w.l.o.g. that the threshold θ = 0. Massart Noise versus RCN. Random Classification Noise (RCN) [AL88] is the special case of Massart noise where each label is flipped with probability exactly η < 1/2. At first glance, it might seem that Massart noise is easier to deal with computationally than RCN. After all, in the Massart model we add at most as much noise as in the RCN model. It turns out that this intuition is fundamentally flawed. Roughly speaking, the ability of the Massart adversary to choose whether to perturb a given label and, if so, with what probability (which is unknown to the learner), makes the design of efficient algorithms in this model challenging. In particular, the well-known connection between learning with RCN and the Statistical Query (SQ) model [Kea93, Kea98] no longer holds, i.e., the property of being an SQ algorithm does not automatically suffice for noise-tolerant learning with Massart noise. We note that this connection with the SQ model is leveraged in [BFKV96, BFKV97] to obtain their polynomial time algorithm for learning halfspaces with RCN. Large Margin Halfspaces. To illustrate our approach, we start by describing our learning algorithm for γ-margin halfspaces on the unit ball. That is, we assume |〈w∗,x〉| ≥ γ for every x in the support, where w∗ ∈ Rd with ‖w∗‖2 = 1 defines the target halfspace hw∗(x) = sign(〈w∗,x〉). Our goal is to design a poly(d, 1/ , 1/γ) time learning algorithm in the presence of Massart noise. In the RCN model, the large margin case is easy because the learning problem is essentially convex. That is, there is a convex surrogate that allows us to formulate the problem as a convex program. We can use SGD to find a near-optimal solution to this convex program, which automatically gives a strong proper learner. This simple fact does not appear explicitly in the literature, but follows easily from standard tools. [Byl94] showed that a variant of the Perceptron algorithm (which can be viewed as gradient descent on a particular convex objective) learns γ-margin halfspaces in poly(d, 1/ , 1/γ) time. The algorithm in [Byl94] requires an additional anti-concentration condition about the distribution, which is easy to remove. In Appendix C, we show that a “smoothed” version of Bylander’s objective suffices as a convex surrogate under only the margin assumption. 
Roughly speaking, the reason that a convex surrogate works for RCN is that the expected effect of the noise on each label is known a priori. Unfortunately, this is not the case for Massart noise. We show (Theorem B.1 in Appendix B) that no convex surrogate can lead to a weak learner, even under a margin assumption. That is, if ŵ is the minimizer of G(w) = E(x,y)∼D[φ(y〈w,x〉)], where φ can be any convex function, then the hypothesis sign(〈ŵ,x〉) is not even a weak learner. So, in sharp contrast with the RCN case, the problem is non-convex in this sense. Our Massart learning algorithm for large margin halfspaces still uses a convex surrogate, but in a qualitatively different way. Instead of attempting to solve the problem in one-shot, our algorithm adaptively applies a sequence of convex optimization problems to obtain an accurate solution in disjoint subsets of the space. Our iterative approach is motivated by a new structural lemma (Lemma 2.5) establishing the following: Even though minimizing a convex proxy does not lead to small misclassification error over the entire space, there exists a region with non-trivial probability mass where it does. Moreover, this region is efficiently identifiable by a simple thresholding rule. Specifically, we show that there exists a threshold T > 0 (which can be found algorithmically) such that the hypothesis sign(〈ŵ,x〉) has error bounded by η + in the region RT = {x : |〈ŵ,x〉| ≥ T}. Here ŵ is any near-optimal solution to an appropriate convex optimization problem, defined via a convex surrogate objective similar to the one used in [Byl94]. We note that Lemma 2.5 is the main technical novelty of this paper and motivates our algorithm. Given Lemma 2.5, in any iteration i we can find the best threshold T (i) using samples, and obtain a learner with misclassification error η + in the corresponding region. Since each region has non-trivial mass, iterating this scheme a small number of times allows us to find a non-proper hypothesis (a decision-list of halfspaces) with misclassification error at most η + in the entire space. The idea of iteratively optimizing a convex surrogate was used in [BFKV96] to learn halfspaces with RCN without a margin. Despite this similarity, we note that the algorithm of [BFKV96] fails to even obtain a weak learner in the Massart model. We point out two crucial technical differences: First, the iterative approach in [BFKV96] was needed to achieve polynomial running time. As mentioned already, a convex proxy is guaranteed to converge to the true solution with RCN, but the convergence may be too slow (when the margin is tiny). In contrast, with Massart noise (even under a margin condition) convex surrogates cannot even give weak learning in the entire domain. Second, the algorithm of [BFKV96] used a fixed threshold in each iteration, equal to the margin parameter obtained after an appropriate pre-processing of the data (that is needed in order to ensure a weak margin property). In contrast, in our setting, we need to find an appropriate threshold T (i) in each iteration i, according to the criterion specified by our Lemma 2.5. General Case. Our algorithm for the general case (in the absence of a margin) is qualitatively similar to our algorithm for the large margin case, but the details are more elaborate. We borrow an idea from [BFKV96] that in some sense allows us to “reduce” the general case to the large margin case. 
Specifically, [BFKV96] (see also [DV04a]) developed a pre-processing routine that slightly modifies the distribution on the unlabeled points and guarantees the following weak margin property: After preprocessing, there exists an explicit margin parameter σ = Ω(1/poly(d, b)), such that any hyperplane through the origin has at least a non-trivial mass of the distribution at distance at least σ from it. Using this pre-processing step, we are able to adapt our algorithm from the previous subsection to work without margin assumptions in poly(d, b, 1/ ) time. While our analysis is similar in spirit to the case of large margin, we note that the margin property obtained via the [BFKV96, DV04a] preprocessing step is (necessarily) weaker, hence additional careful analysis is required. Lower Bounds Against Natural Approaches. We have already explained our Theorem B.1, which shows that using a convex surrogate over the entire space cannot not give a weak learner. Our algorithm, however, can achieve error η + by iteratively optimizing a specific convex surrogate in disjoint subsets of the domain. A natural question is whether one can obtain qualitatively better accuracy, e.g., f(OPT)+ , by using a different convex objective function in our iterative thresholding approach. We show (Theorem B.2) that such an improvement is not possible: Using a different convex proxy cannot lead to error better than (1− o(1)) · η. It is a plausible conjecture that improving on the error guarantee of our algorithm is computationally hard. We leave this as an intriguing open problem for future work. 1.3 Prior and Related Work Bylander [Byl94] gave a polynomial time algorithm to learn large margin halfspaces with RCN (under an additional anti-concentration assumption). The work of Blum et al. [BFKV96, BFKV97] gave the first polynomial time algorithm for distribution-independent learning of halfspaces with RCN without any margin assumptions. Soon thereafter, [Coh97] gave a polynomial-time proper learning algorithm for the problem. Subsequently, Dunagan and Vempala [DV04b] gave a rescaled perceptron algorithm for solving linear programs, which translates to a significantly simpler and faster proper learning algorithm. The term “Massart noise” was coined after [MN06]. An equivalent version of the model was previously studied by Rivest and Sloan [Slo88, Slo92, RS94, Slo96], and a very similar asymmetric random noise model goes back to Vapnik [Vap82]. Prior to this work, essentially no efficient algorithms with non-trivial error guarantees were known in the distribution-free Massart noise model. It should be noted that polynomial time algorithms with error OPT+ are known [ABHU15, ZLC17, YZ17] when the marginal distribution on the unlabeled data is uniform on the unit sphere. For the case that the unlabeled data comes from an isotropic log-concave distribution, [ABHZ16] give a d2 poly(1/(1−2η)) /poly( ) sample and time algorithm. 1.4 Preliminaries For n ∈ Z+, we denote [n] def = {1, . . . , n}. We will use small boldface characters for vectors and we let ei denote the i-th vector of an orthonormal basis. For x ∈ Rd, and i ∈ [d], xi denotes the i-th coordinate of x, and ‖x‖2 def = ( ∑d i=1 x 2 i ) 1/2 denotes the `2-norm of x. We will use 〈x,y〉 for the inner product between x,y ∈ Rd. We will use E[X] for the expectation of random variable X and Pr[E ] for the probability of event E . An origin-centered halfspace is a Boolean-valued function hw : Rd → {±1} of the form hw(x) = sign (〈w,x〉), where w ∈ Rd. 
(Note that we may assume w.l.o.g. that ‖w‖2 = 1.) We denote byHd the class of all origin-centered halfspaces on Rd. We consider a classification problem where labeled examples (x, y) are drawn i.i.d. from a distributionD. We denote byDx the marginal ofD on x, and for any x denoteDy(x) the distribution of y conditional on x. Our goal is to find a hypothesis classifier h with low misclassification error. We will denote the misclassification error of a hypothesis h with respect to D by errD0−1(h) = Pr(x,y)∼D[h(x) 6= y]. Let OPT = minh∈Hd errD0−1(h) denote the optimal misclassification error of any halfspace, and w∗ be the normal vector to a halfspace hw∗ that achieves this. 2 Algorithm for Learning Halfspaces with Massart Noise In this section, we present the main result of this paper, which is an efficient algorithm that achieves η + misclassification error for distribution-independent learning of halfspaces with Massart noise η. Our algorithm uses (stochastic) gradient descent on a convex proxy function L(w) for the misclassification error to identify a region with small misclassification error. The loss function penalizes the points which are misclassified by the threshold function hw, proportionally to the distance from the corresponding hyperplane, while rewards the correctly classified points at a smaller rate. Directly optimizing this convex objective does not lead to a separator with low error, but guarantees that for a non-negligible fraction of the mass away from the separating hyperplane the misclassification error will be at most η + . Classifying points in this region according to the hyperplane and recursively working on the remaining points, we obtain an improper learning algorithm that achieves η + error overall. We now develop some necessary notation before proceeding with the description and analysis of our algorithm. Our algorithm considers the following convex proxy for the misclassification error as a function of the weight vector w: L(w) = E (x,y)∼D [LeakyReluλ(−y〈w,x〉)] , under the constraint ‖w‖2 ≤ 1, where LeakyReluλ(z) = { (1− λ)z if z ≥ 0 λz if z < 0 and λ is the leakage parameter, which we will set to be λ ≈ η. We define the per-point misclassification error and the error of the proxy function as err(w,x) = Pry∼Dy(x)[w(x) 6= y] and `(w,x) = Ey∼Dy(x)[LeakyReluλ(−y〈w,x〉)] respectively. Notice that errD0−1(hw) = Ex∼Dx [err(w,x)] and L(w) = Ex∼Dx [`(w,x)]. Moreover, OPT = Ex∼Dx [err(w ∗,x)] = Ex∼Dx [η(x)]. Relationship between proxy loss and misclassification error We first relate the proxy loss and the misclassification error. Claim 2.1. For any w,x, we have that `(w,x) = (err(w,x)− λ)|〈w,x〉|. Proof. We consider two cases: • Case sign(〈w,x〉) = sign(〈w∗,x〉): In this case, we have that err(w,x) = η(x), while `(w,x) = η(x)(1− λ)|〈w,x〉| − (1− η(x))λ|〈w,x〉| = (η(x)− λ)|〈w,x〉|. • Case sign(〈w,x〉) 6= sign(〈w∗,x〉): In this case, we have that err(w,x) = 1 − η(x), while `(w,x) = (1− η(x))(1− λ)|〈w,x〉| − η(x)λ|〈w,x〉| = (1− η(x)− λ)|〈w,x〉|. This completes the proof of Claim 2.1. Claim 2.1 shows that minimizing Ex∼Dx [ `(w,x) |〈w,x〉| ] is equivalent to minimizing the misclassification error. Unfortunately, this objective is hard to minimize as it is non-convex, but one would hope that minimizing L(w) instead may have a similar effect. As we show, this is not true because |〈w,x〉| might vary significantly across points, and in fact it is not possible to use a convex proxy that achieves bounded misclassification error directly. 
Our algorithm circumvents this difficulty by approaching the problem indirectly to find a non-proper classifier. Specifically, our algorithm works in multiple rounds, where within each round only points with high value of |〈w, x〉| are considered. The intuition is based on the fact that the approximation of the convex proxy to the misclassification error is more accurate for those points that have comparable distance to the halfspace. In Section 2.1, we handle the large margin case and in Section 2.2 we handle the general case.

2.1 Warm-up: Learning Large Margin Halfspaces

We consider the case that there is no probability mass within distance γ from the separating hyperplane 〈w*, x〉 = 0, ‖w*‖₂ = 1. Formally, assume that for every x ∼ D_x, ‖x‖₂ ≤ 1 and |〈w*, x〉| ≥ γ. The pseudo-code of our algorithm is given in Algorithm 1. Our algorithm returns a decision list [(w^(1), T^(1)), (w^(2), T^(2)), · · · ] as output. To classify a point x given the decision list, the first i is identified such that |〈w^(i), x〉| ≥ T^(i) and sign(〈w^(i), x〉) is returned. If no such i exists, an arbitrary prediction is returned.

Algorithm 1 Main Algorithm (with margin)
1: Set S^(1) = R^d, λ = η + ε, m = Õ(1/(γ²ε⁴)).
2: Set i ← 1.
3: Draw O((1/ε²) log(1/(εγ))) samples from D_x to form an empirical distribution D̃_x.
4: while Pr_{x∼D̃_x}[x ∈ S^(i)] ≥ ε do
5: Set D^(i) = D|_{S^(i)}, the distribution conditional on the unclassified points.
6: Let L^(i)(w) = E_{(x,y)∼D^(i)}[LeakyRelu_λ(−y〈w, x〉)].
7: Run SGD on L^(i)(w) for Õ(1/(γ²ε²)) iterations to get w^(i) with ‖w^(i)‖₂ = 1 such that L^(i)(w^(i)) ≤ min_{w:‖w‖₂≤1} L^(i)(w) + γε/2.
8: Draw m samples from D^(i) to form an empirical distribution D^(i)_m.
9: Find a threshold T^(i) such that Pr_{(x,y)∼D^(i)_m}[|〈w^(i), x〉| ≥ T^(i)] ≥ γε and the empirical misclassification error, Pr_{(x,y)∼D^(i)_m}[h_{w^(i)}(x) ≠ y | |〈w^(i), x〉| ≥ T^(i)], is minimized.
10: Update the unclassified region S^(i+1) ← S^(i) \ {x : |〈w^(i), x〉| ≥ T^(i)} and set i ← i + 1.
11: Return the classifier [(w^(1), T^(1)), (w^(2), T^(2)), · · · ]

The main result of this section is the following:

Theorem 2.2. Let D be a distribution on B_d × {±1} such that D_x satisfies the γ-margin property with respect to w* and y is generated by sign(〈w*, x〉) corrupted with Massart noise at rate η < 1/2. Algorithm 1 uses Õ(1/(γ³ε⁵)) samples from D, runs in poly(d, 1/ε, 1/γ) time, and returns, with probability 2/3, a classifier h with misclassification error err^D_{0−1}(h) ≤ η + ε.

Our analysis focuses on a single iteration of Algorithm 1. We will show that a large fraction of the points is classified at every iteration within error η + ε. To achieve this, we analyze the convex objective L. We start by showing that the optimal classifier w* obtains a significantly negative objective value.

Lemma 2.3. If λ ≥ η, then L(w*) ≤ −γ(λ − OPT).

Proof. For any fixed x, using Claim 2.1, we have that

ℓ(w*, x) = (err(w*, x) − λ)|〈w*, x〉| = (η(x) − λ)|〈w*, x〉| ≤ −γ(λ − η(x)),

since |〈w*, x〉| ≥ γ and η(x) − λ ≤ 0. Taking expectation over x ∼ D_x, the statement follows.

Lemma 2.3 is the only place where the Massart noise assumption is used in our approach and establishes that points with sufficiently negative value exist. As we will show, any weight vector w with this property can be found with few samples and must accurately classify some region of non-negligible mass away from it (Lemma 2.5). We now argue that we can use stochastic gradient descent (SGD) to efficiently identify a point w that achieves comparably small objective value to the guarantee of Lemma 2.3.
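Before stating the SGD guarantee, here is how the pieces of Algorithm 1 fit together in code, a minimal illustrative sketch under our own naming (not the authors' implementation); the proxy minimizer below is the projected SGD of Lemma 2.4, and the threshold search in Step 9 is done over empirical quantiles:

import numpy as np

def sgd_proxy(X, y, lam, steps=20_000, seed=0):
    """Projected SGD on the LeakyRelu proxy (cf. Lemma 2.4 below): stochastic
    subgradient steps on w -> LeakyRelu_lam(-y <w, x>), projected onto the
    unit l2-ball, returning the normalized average iterate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    rho = 1.0 / np.sqrt(steps)
    w, w_sum = np.zeros(d), np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        slope = (1 - lam) if -y[i] * (X[i] @ w) >= 0 else lam   # LeakyRelu subgradient
        w = w - rho * slope * (-y[i]) * X[i]
        w /= max(1.0, np.linalg.norm(w))                        # project onto the ball
        w_sum += w
    w_bar = w_sum / steps
    return w_bar / max(np.linalg.norm(w_bar), 1e-12)            # w.l.o.g. unit norm

def train_decision_list(X, y, eta, eps, gamma, n_grid=100):
    """Sketch of Algorithm 1: minimize the proxy on the still-unclassified
    points, find a threshold T keeping at least gamma*eps empirical mass with
    minimal conditional error (Step 9), peel that region off, and repeat."""
    lam = eta + eps
    dlist, alive = [], np.ones(len(y), dtype=bool)
    while alive.mean() >= eps:
        Xa, ya = X[alive], y[alive]
        w = sgd_proxy(Xa, ya, lam)
        margins = np.abs(Xa @ w)
        best_T, best_err = None, np.inf
        for T in np.quantile(margins, np.linspace(0.0, 1.0 - gamma * eps, n_grid)):
            far = margins >= T                    # candidate region {|<w, x>| >= T}
            if far.mean() < gamma * eps:          # require non-trivial mass
                continue
            err = np.mean(np.sign(Xa[far] @ w) != ya[far])
            if err < best_err:
                best_T, best_err = T, err
        dlist.append((w, best_T))
        idx = np.flatnonzero(alive)
        alive[idx[np.abs(X[idx] @ w) >= best_T]] = False   # peel off classified region
    return dlist

def predict(dlist, x):
    """Decision-list prediction: the first rule with |<w, x>| >= T fires."""
    for w, T in dlist:
        if abs(x @ w) >= T:
            return float(np.sign(x @ w))
    return 1.0                                    # arbitrary default prediction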
We use the following standard property of SGD:

Lemma 2.4 (see, e.g., Theorem 3.4.11 in [Duc16]). Let L be any convex function. Consider the (projected) SGD iteration that is initialized at w^(0) = 0 and for every step computes

w^(t+1/2) = w^(t) − ρ v^(t)   and   w^(t+1) = argmin_{w:‖w‖₂≤1} ‖w − w^(t+1/2)‖₂,

where v^(t) is a stochastic gradient such that for all steps E[v^(t) | w^(t)] ∈ ∂L(w^(t)) and ‖v^(t)‖₂ ≤ 1. Assume that SGD is run for T iterations with step size ρ = 1/√T and let w̄ = (1/T) ∑_{t=1}^T w^(t). Then, for any ε, δ > 0, after T = Ω(log(1/δ)/ε²) iterations, with probability at least 1 − δ we have that L(w̄) ≤ min_{w:‖w‖₂≤1} L(w) + ε.

By Lemma 2.3, we know that min_{w:‖w‖₂≤1} L(w) ≤ −γ(λ − OPT). By Lemma 2.4, it follows that by running SGD on L(w) with projection to the unit ℓ₂-ball for O(log(1/δ)/(γ²(λ − OPT)²)) steps, we find a w such that L(w) ≤ −γ(λ − OPT)/2 with probability at least 1 − δ. Note that we can assume without loss of generality that ‖w‖₂ = 1, as increasing the magnitude of w only decreases the objective value.

We now consider the misclassification error of the halfspace h_w conditional on the points that are further than some distance T from the separating hyperplane. We claim that there exists a threshold T > 0 where the restriction has non-trivial mass and the conditional misclassification error is small:

Lemma 2.5. Consider a vector w with L(w) < 0. There exists a threshold T ≥ 0 such that (i) Pr_{(x,y)∼D}[|〈w, x〉| ≥ T] ≥ |L(w)|/(2λ), and (ii) Pr_{(x,y)∼D}[h_w(x) ≠ y | |〈w, x〉| ≥ T] ≤ λ − |L(w)|/2.

Proof. We will show there is a T ≥ 0 such that Pr_{(x,y)∼D}[h_w(x) ≠ y | |〈w, x〉| ≥ T] ≤ λ − ζ, where ζ := |L(w)|/2, or equivalently, E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T}] ≤ 0. For a T drawn uniformly at random in [0, 1], we have that

∫₀¹ E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T}] dT = E_{x∼D_x}[(err(w, x) − λ)|〈w, x〉|] + ζ E_{x∼D_x}[|〈w, x〉|] ≤ E_{x∼D_x}[ℓ(w, x)] + ζ = L(w) + ζ = L(w)/2 < 0.

Thus, there exists a T̄ such that E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T̄}] ≤ 0. Consider the minimum such T̄. Then we have

∫_{T̄}¹ E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T}] dT ≥ −λ · Pr_{(x,y)∼D}[|〈w, x〉| ≥ T̄].

By definition of T̄, it must be the case that ∫₀^{T̄} E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T}] dT ≥ 0. Therefore,

L(w)/2 ≥ ∫_{T̄}¹ E_{x∼D_x}[(err(w, x) − λ + ζ) 1_{|〈w,x〉|≥T}] dT ≥ −λ · Pr_{(x,y)∼D}[|〈w, x〉| ≥ T̄],

which implies that Pr_{(x,y)∼D}[|〈w, x〉| ≥ T̄] ≥ |L(w)|/(2λ). This completes the proof of Lemma 2.5.

Even though minimizing the convex proxy L does not lead to low misclassification error overall, Lemma 2.5 shows that there exists a region of non-trivial mass where it does. This region is identifiable by a simple threshold rule. We are now ready to prove Theorem 2.2.

Proof of Theorem 2.2. We consider the steps of Algorithm 1 in each iteration of the while loop. At iteration i, we consider a distribution D^(i) consisting only of points not handled in previous iterations. We start by noting that with high probability the total number of iterations is Õ(1/(γε)). This can be seen as follows: The empirical probability mass under D^(i)_m of the region {x : |〈w^(i), x〉| ≥ T^(i)} removed from S^(i) to obtain S^(i+1) is at least γε (Step 9). Since m = Õ(1/(γ²ε⁴)), the DKW inequality [DKW56] implies that the true probability mass of this region is at least γε/2 with high probability. By a union bound over i ≤ K = Θ(log(1/ε)/(εγ)), it follows that with high probability we have that Pr_{D_x}[S^(i+1)] ≤ (1 − γε/2)^i for all i ∈ [K]. After K iterations, we will have that Pr_{D_x}[S^(i+1)] ≤ ε/3.
Step 3 guarantees that the mass of S^(i) under D̃_x is within an additive ε/3 of its mass under D_x, for i ∈ [K]. This implies that the loop terminates after at most K iterations.

By Lemma 2.3 and the fact that every D^(i) has margin γ, it follows that the minimizer of the loss L^(i) has value less than −γ(λ − OPT^(i)) ≤ −γε, as OPT^(i) ≤ η and λ = η + ε. By the guarantees of Lemma 2.4, running SGD in line 7 on L^(i)(·) with projection to the unit ℓ₂-ball for O(log(1/δ)/(γ²ε²)) steps, we obtain a w^(i) such that, with probability at least 1 − δ, it holds that L^(i)(w^(i)) ≤ −γε/2 and ‖w^(i)‖₂ = 1. Here δ > 0 is a parameter that is selected so that the following claim holds: With probability at least 9/10, for all iterations i of the while loop we have that L^(i)(w^(i)) ≤ −γε/2. Since the total number of iterations is Õ(1/(γε)), setting δ to Ω̃(εγ) and applying a union bound over all iterations gives the previous claim. Therefore, the total number of SGD steps per iteration is Õ(1/(γ²ε²)). For a given iteration of the while loop, running SGD requires Õ(1/(γ²ε²)) samples from D^(i), which translate to at most Õ(1/(γ²ε³)) samples from D, as Pr_{x∼D_x}[x ∈ S^(i)] ≥ 2ε/3.

Lemma 2.5 implies that there exists T ≥ 0 such that: (a) Pr_{(x,y)∼D^(i)}[|〈w, x〉| ≥ T] ≥ γε, and (b) Pr_{(x,y)∼D^(i)}[h_w(x) ≠ y | |〈w, x〉| ≥ T] ≤ η + ε. Line 9 of Algorithm 1 estimates the threshold using samples. By the DKW inequality [DKW56], we know that with m = Õ(1/(γ²ε⁴)) samples we can estimate the CDF within error γε² with probability 1 − poly(ε, γ). This suffices to estimate the probability mass of the region within additive γε² and the misclassification error within ε/3. This is satisfied for all iterations with constant probability.

In summary, with high constant success probability, Algorithm 1 runs for Õ(1/(γε)) iterations and draws Õ(1/(γ²ε⁴)) samples per round for a total of Õ(1/(γ³ε⁵)) samples. As each iteration runs in polynomial time, the total running time follows. When the while loop terminates, we have that Pr_{x∼D_x}[x ∈ S^(i)] ≤ 4ε/3, i.e., we will have accounted for at least a (1 − 4ε/3)-fraction of the total probability mass. Since our algorithm achieves misclassification error at most η + 4ε/3 in all the regions we accounted for, its total misclassification error is at most η + 8ε/3. Rescaling ε by a constant factor gives Theorem 2.2.

Remark 2.6. If the value of OPT is smaller than η − ξ for some value ξ > 0, Algorithm 1 achieves misclassification error less than η − Ω(γ²ξ²) when run with ε = O(γ²ξ²). This is because, in the first iteration, L^(1)(w^(1)) ≤ −γ(λ − OPT)/2 ≤ −γξ/2, which implies, by Lemma 2.5, that the obtained error in S^(1) is at most λ − γξ/4. The misclassification error in the remaining regions is at most λ + ε, and region S^(1) has probability mass at least γξ/4. Thus, the total misclassification error is at most λ + ε − γ²ξ²/16 = η − Ω(γ²ξ²), when run with ε = O(γ²ξ²).

2.2 The General Case

In the general case, we assume that D_x is an arbitrary distribution supported on b-bit integers. While such a distribution might have exponentially small margin in the dimension d (or even 0), we will preprocess the distribution to ensure a margin condition by removing outliers. We will require the following notion of an outlier:

Definition 2.7 ([DV04a]). We call a point x in the support of a distribution D_x a β-outlier if there exists a vector w ∈ R^d such that 〈w, x〉² ≥ β E_{x∼D_x}[〈w, x〉²].
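The existential quantifier over w in Definition 2.7 can be checked in closed form: by Cauchy-Schwarz (substituting u = Σ^{1/2}w), sup_w 〈w, x〉²/E[〈w, x〉²] = xᵀΣ⁻¹x, where Σ = E_{x∼D_x}[xxᵀ]. A minimal illustrative sketch of ours (not the paper's code) of this test, together with the isotropic rescaling used in Algorithm 2 below:

import numpy as np

def outlier_ratio(x, Sigma):
    """sup_w <w,x>^2 / E[<w,x>^2] = x^T Sigma^{-1} x; the point x is a
    beta-outlier (Definition 2.7) iff this ratio is at least beta."""
    return float(x @ np.linalg.solve(Sigma, x))

def whiten(X, Gamma):
    """Isotropic rescaling of Algorithm 2: x -> Gamma * Sigma^{-1/2} x, so that
    once Gamma^{-1}-outliers are removed, every point has l2-norm at most 1."""
    Sigma = X.T @ X / len(X)                      # empirical second-moment matrix
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(np.clip(evals, 1e-12, None) ** -0.5) @ evecs.T
    return Gamma * X @ inv_sqrt, inv_sqrt

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
Sigma = X.T @ X / len(X)
print(outlier_ratio(X[0], Sigma), outlier_ratio(20 * X[0], Sigma))  # second is 400x larger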
We will use Theorem 3 of [DV04a], which shows that any distribution supported on b-bit integers can be efficiently preprocessed using samples so that no large outliers exist.

Lemma 2.8 (Rephrasing of Theorem 3 of [DV04a]). Using m = Õ(d²b) samples from D_x, one can identify with high probability an ellipsoid E such that Pr_{x∼D_x}[x ∈ E] ≥ 1/2 and D_x|_E has no Γ⁻¹ = Õ(db)-outliers.

Given this lemma, we can adapt Algorithm 1 for the large margin case to work in general. The pseudo-code is given in Algorithm 2. It similarly returns a decision list [(w^(1), T^(1), E^(1)), (w^(2), T^(2), E^(2)), · · · ] as output.

Algorithm 2 Main Algorithm (general case)
1: Set S^(1) = R^d, λ = η + ε, Γ⁻¹ = Õ(db), m = Õ(1/(Γ²ε⁴)).
2: Set i ← 1.
3: Draw O((1/ε²) log(1/(εΓ))) samples from D_x to form an empirical distribution D̃_x.
4: while Pr_{x∼D̃_x}[x ∈ S^(i)] ≥ ε do
5: Run the algorithm of Lemma 2.8 to remove Γ⁻¹-outliers from the distribution D|_{S^(i)} by filtering points outside the ellipsoid E^(i).
6: Let Σ^(i) = E_{(x,y)∼D|_{S^(i)∩E^(i)}}[xxᵀ] and set D^(i) = Γ(Σ^(i))^{−1/2} · D|_{S^(i)∩E^(i)}, i.e., the distribution D|_{S^(i)∩E^(i)} brought into isotropic position and rescaled by Γ so that all vectors have ℓ₂-norm at most 1.
7: Let L^(i)(w) = E_{(x,y)∼D^(i)}[LeakyRelu_λ(−y〈w, x〉)].
8: Run SGD on L^(i)(w) for Õ(1/(Γ²ε²)) iterations, to get w^(i) with ‖w^(i)‖₂ = 1 such that L^(i)(w^(i)) ≤ min_{w:‖w‖₂≤1} L^(i)(w) + Γε/2.
9: Draw m samples from D^(i) to form an empirical distribution D^(i)_m.
10: Find a threshold T^(i) such that Pr_{(x,y)∼D^(i)_m}[|〈w^(i), x〉| ≥ T^(i)] ≥ Γε and the empirical misclassification error, Pr_{(x,y)∼D^(i)_m}[h_{w^(i)}(x) ≠ y | |〈w^(i), x〉| ≥ T^(i)], is minimized.
11: Revert the linear transformation by setting w^(i) ← Γ(Σ^(i))^{−1/2} · w^(i).
12: Update the unclassified region S^(i+1) ← S^(i) \ {x : x ∈ E^(i) ∧ |〈w^(i), x〉| ≥ T^(i)} and set i ← i + 1.
13: Return the classifier [(w^(1), T^(1), E^(1)), (w^(2), T^(2), E^(2)), · · · ]

Our main result is the following theorem:

Theorem 2.9. Let D be a distribution over (d + 1)-dimensional labeled examples with bit-complexity b, generated by an unknown halfspace corrupted by Massart noise at rate η < 1/2. Algorithm 2 uses Õ(d³b³/ε⁵) samples, runs in poly(d, 1/ε, b) time, and returns, with probability 2/3, a classifier h with misclassification error err^D_{0−1}(h) ≤ η + ε.

3 Conclusions

The main contribution of this paper is the first non-trivial learning algorithm for the class of halfspaces (or even disjunctions) in the distribution-free PAC model with Massart noise. Our algorithm achieves misclassification error η + ε in time poly(d, 1/ε), where η < 1/2 is an upper bound on the Massart noise rate. The most obvious open problem is whether this error guarantee can be improved to f(OPT) + ε (for some function f : R → R such that lim_{x→0} f(x) = 0) or, ideally, to OPT + ε. It follows from our lower bound constructions that such an improvement would require new algorithmic ideas. It is a plausible conjecture that obtaining better error guarantees is computationally intractable. This is left as an interesting open problem for future work. Another open question is whether there is an efficient proper learner matching the error guarantees of our algorithm. We believe that this is possible, building on the ideas in [DV04b], but we did not pursue this direction. More broadly, what other concept classes admit non-trivial algorithms in the Massart noise model? Can one establish non-trivial reductions between the Massart noise model and the agnostic model?
And are there other natural semi-random input models that allow for efficient PAC learning algorithms in the distribution-free setting?

Acknowledgments

Part of this work was performed while Ilias Diakonikolas was at the Simons Institute for the Theory of Computing during the program on Foundations of Data Science. Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. This research was performed while Themis Gouleakis was a postdoctoral researcher at USC.
1. What is the focus of the paper regarding learning halfspaces?
2. What are the strengths of the proposed approach, particularly in its ability to handle different types of noise models?
3. How does the reviewer assess the significance of the main result and its contribution to advancing the state of the art?
4. Can you provide more details about the proof of Lemma 2.5 and how it leads to a decision list of halfspaces?
5. How does the paper's method compare to prior works, such as the celebrated result of BFKV'97, in terms of its noise model and time complexity?
Review
Review This paper studies the problem of learning halfspaces under arbitrary data distributions and when label noise is present in the data. This problem has a rich history, and the celebrated result of [BFKV'97] showed that there exists a polynomial time learning algorithm when the label noise is i.i.d., i.e., when each label is flipped independently with probability eta < 1/2. Essentially this is the only noise model for which we know distribution-independent learning results. At the other extreme we have the agnostic learning model, where we know that learning halfspaces under the uniform/log-concave distributions is easy and there is also evidence that agnostic learning under arbitrary distributions is hard. An intermediate noise model is the Massart/bounded noise model, where the label of each example x is flipped independently with probability p_x < eta < 1/2. Even for this model it has been a longstanding open problem as to whether one can design a learning algorithm that for any eps > 0 achieves error OPT + eps, where OPT is the error of the best halfspace. Before this paper it was known how to achieve this only for uniform/log-concave distributions. The main result of the paper is that for learning halfspaces under arbitrary distributions under Massart noise, one can achieve error eta + eps in polynomial time. This is a significant advance over the state of the art. The paper shows that one can construct a decision list of halfspaces to achieve this bound. The main insight in achieving this is Lemma 2.5, which states that under Massart noise, if the data distribution has some non-trivial margin, then by minimizing a convex proxy one will end up with a halfspace w that does well on a non-trivial amount of the data distribution. Furthermore, this region can be identified by simply thresholding |w.x| at a certain value T. Once this is proved, one immediately obtains a learning algorithm for large margin distributions by simply repeating the process on the distribution that does not fall within the threshold. For the general case, one can use the idea of [BFKV] to preprocess the data so that a large fraction satisfies a good margin and then apply Lemma 2.5. I very much enjoyed reading the paper; it makes progress on a long-standing open problem and will lead to further theoretical work in the area. This is a very strong submission and I absolutely recommend acceptance.
NIPS
1. What is the main contribution of the paper regarding distribution-free PAC learning?
2. What are the strengths of the paper, particularly in its techniques and organization?
3. Do you have any questions or concerns regarding the paper's statements and proofs?
4. How does the reviewer assess the significance of the paper's resolution of the open question?
5. Are there any issues with the algorithm and its application in the paper?
Review
Review This paper provides an efficient algorithm for distribution-free PAC learning of halfspaces under Massart noise. This resolves a long-standing open question, at least for representations with bounded bit-complexity. The paper is very well-written; the techniques are interesting and well explained, and the organization is good. At several points there are sloppy statements that are incorrect as stated, though they all seem fixable. The detailed comments below list these issues. I request that the authors address these issues in their response, as well as fix the final version of this submission. The main observation exploited in this work is that while optimizing a convex surrogate cannot achieve the required error in this case (which the authors also prove), there does exist a convex surrogate that achieves a small error on some non-negligible region of the space. Repeatedly minimizing this convex surrogate on the remaining part of the space obtains a low error on the entire space, using improper learning with a decision list of half-spaces. The paper presents the solution to learning half-spaces with a margin and Massart noise, and later generalizes the solution to half-spaces without a margin, using the finite bit-complexity and an additional preprocessing step. The proofs in the body of the paper seem correct, except for some fixable issues listed below. This paper addresses an important and interesting question, and resolves it with an elegant and well-explained technique. Assuming all the small issues are fixed, I strongly recommend that this paper be accepted.

Detailed comments:
~~~~~~~~~~~~~~~~~~
1. The dependence of the solution for the zero-margin case on the bit-complexity of the representation is not revealed until Section 1.1. Since the paper claims to resolve an open problem, it is important to discuss the relationship between the original statement of the open problem and the actual solution.
2. In Alg 1, there seems to be an assumption that the marginal distribution is completely available (line 3), otherwise an estimation process is needed here. Please explain.
3. Page 6, lines 249-252 ignore the fact that the guarantee of Lemma 2.4 holds in expectation only. This is then addressed in page 7, lines 276-279, but it is too late, as the paragraph on page 6 is incorrect as stated. Please unite these two paragraphs and put them on page 6.
4. Proof of Lemma 2.5, page 7: the last display equation is incorrect. I believe there are several typos there. The conclusion in line 261 is correct though. Please fix this and explain your correction.
5. Page 7, lines 276-279: It is proposed to use Markov's inequality. Please comment on the boundedness conditions that allow you to do that. Also, Markov's inequality cannot actually get the same guarantee as that of the expectation; there will be some constant factor. Finally, when the procedure is repeated, the right w(i) needs to be selected by estimating L(w(i)) from samples. Please address this.
6. Page 7, line 280: (i) seems to require a small enough lambda.
7. Alg 1, line 7: unclear what is meant by "uniform over the samples". Please rephrase.
8. Page 3, line 101: it is not clear what "on the unit ball" refers to, though I assume this is an assumption on x. Please rephrase.
9. Page 4, line 185: theta is not used.
NIPS
Title Exponential Separations in Symmetric Neural Networks Abstract In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network [21] architecture as a natural generalization of the DeepSets [32] architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size N with elements in dimension D, which can be efficiently approximated by the former architecture, but provably requires width exponential in N and D for the latter. 1 Introduction The modern success of deep learning can in part be attributed to architectures that enforce appropriate invariance. Invariance to permutation of the input, i.e. treating the input as an unordered set, is a desirable property when learning symmetric functions in such fields as particle physics and population statistics. The simplest architectures that enforce permutation invariance treat each set element individually without allowing for interaction, as captured by the popular DeepSets model [18, 32]. Several architectures explicitly enable interaction between set elements, the simplest being the Relational Networks [21] that encode pairwise interaction. This may be understood as an instance of self-attention, the mechanism underlying Transformers [27], which have emerged as powerful generic neural network architectures to process a wide variety of data, from image patches to text to physical data. Specifically, Set Transformers [12] are special instantiations of Transformers, made permutation equivariant by omitting positional encoding of inputs, and using self-attention for pooling. Both the DeepSets and Relational Network architectures are universal approximators for the class of symmetric functions. But empirical evidence suggests an inherent advantage of symmetric networks using self-attention in synthetic settings [16], on point cloud data [12] and in quantum chemistry [17]. In this work, we formalize this question in terms of approximation power, and explicitly construct symmetric functions which provably require exponentially many neurons in the DeepSets model, yet are efficiently approximated with self-attention. This exponential separation bears notable differences from typical separation results. In particular, while the expressive power of a vanilla neural network is characterized by depth and width, the expressiveness of symmetric networks is controlled particularly by the symmetric width. In contrast to depth separations of vanilla neural networks [7], in this work we observe width separations, where the weaker architectures (even with arbitrary depth) require exponential symmetric width to match the expressive power of stronger architectures. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Summary of Contributions

In this work:
• We demonstrate a width separation between the DeepSets and Relational Network architectures, where the former requires symmetric width $L \gg \mathrm{poly}(N, D)$ to approximate a family of analytic symmetric functions, while the latter can approximate with polynomial efficiency. This also answers an open question on high-dimensional DeepSets representation posed in Wagstaff et al. [30].
• We introduce an extension of the Hall inner product to high dimensions that preserves low-degree orthogonality of multisymmetric powersum polynomials, which may be of independent interest.

2 Setup and Main Result

2.1 Symmetric Architectures

To introduce the symmetric architectures, we must first characterize how to treat sets as inputs. We will consider sets of size $N$, where each element of the set is a vector of dimension $D$. In particular, we will represent a set as a matrix $X \in \mathbb{C}^{D \times N}$. Thus, each column vector $x_n \in \mathbb{C}^D$ is an element of the set. Note that we consider complex-valued inputs because the natural inner product over symmetric polynomials integrates over the complex unit circle, see Macdonald [14] or Theorem 4.3. A function $f : \mathbb{C}^{D \times N} \to \mathbb{C}$ is symmetric if $f(X) = f(X\Pi)$ for any permutation matrix $\Pi \in \mathbb{R}^{N \times N}$, i.e. if $f$ is invariant to permuting the columns of $X$. In other words, a symmetric function treats the input $X$ as an unordered set of column vectors. Given the symmetric width parameter $L$, we consider two primary symmetric architectures:

Definition 2.1. Let $\mathrm{Sym}_L$ denote the class of singleton symmetric networks with symmetric width $L$, i.e. functions $f$ of the form:
$$f(X) = \rho(\phi_1(X), \dots, \phi_L(X)) \tag{1}$$
$$\phi_l(X) = \sum_{n=1}^{N} \psi_l(x_n) \tag{2}$$
where $\{\psi_l : \mathbb{C}^D \to \mathbb{C}\}_{l=1}^{L}$ and $\rho : \mathbb{C}^L \to \mathbb{C}$ are arbitrary neural networks with analytic activations.

The class $\mathrm{Sym}_L$ is exactly the architecture of DeepSets [32] restricted to analytic activations. However, we introduce this notation to differentiate this class from the more expressive architectures that allow for pairwise interaction among set elements. From the theory of symmetric polynomials, if $L \ge L^* := \binom{N+D}{N} - 1$, then $f \in \mathrm{Sym}_L$ is a universal approximator for any analytic symmetric function [19]. Therefore we will primarily be interested in the expressive power of $\mathrm{Sym}_L$ for $L < L^*$.

Definition 2.2. Let $\mathrm{Sym}^2_L$ denote the class of pairwise symmetric networks with symmetric width $L$, i.e. functions $f$ of the form:
$$f(X) = \rho(\phi_1(X), \dots, \phi_L(X)) \tag{3}$$
$$\phi_l(X) = \sum_{n,n'=1}^{N} \psi_l(x_n, x_{n'}) \tag{4}$$
where $\{\psi_l : \mathbb{C}^{D \times D} \to \mathbb{C}\}_{l=1}^{L}$ and $\rho : \mathbb{C}^L \to \mathbb{C}$ are arbitrary neural networks with analytic activations.

Similarly, the class $\mathrm{Sym}^2_L$ is exactly the architecture of Relational Pooling [21] with analytic activations. We note this architecture is also equivalent to the 2-ary instantiation of Janossy Pooling [16]. A minimal code sketch of both pooling structures follows below.
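To make Definitions 2.1 and 2.2 concrete, here is a minimal NumPy sketch of the two pooling structures. It is an illustration only: the maps psi and rho below are toy analytic functions standing in for the arbitrary analytic networks of the definitions, and all names are ours, not the paper's.

```python
import numpy as np

# Minimal sketch of Definitions 2.1 and 2.2. The maps psi and rho are toy
# analytic functions standing in for arbitrary analytic neural networks.

def sym_L(X, psis, rho):
    """Singleton (DeepSets-style) network: sum psi_l over single elements."""
    feats = np.array([sum(psi(X[:, n]) for n in range(X.shape[1]))
                      for psi in psis])
    return rho(feats)

def sym2_L(X, psis, rho):
    """Pairwise (Relational Network-style): sum psi_l over ordered pairs."""
    N = X.shape[1]
    feats = np.array([sum(psi(X[:, n], X[:, m])
                          for n in range(N) for m in range(N))
                      for psi in psis])
    return rho(feats)

# Both are invariant to permuting the columns of X (the set elements).
rng = np.random.default_rng(0)
D, N = 3, 5
X = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(D, N)))  # points in (S^1)^{D x N}
perm = rng.permutation(N)

psi1 = [lambda x: np.sum(x ** 2)]
psi2 = [lambda x, y: np.sum(x * y)]
rho = lambda z: z[0]

assert np.isclose(sym_L(X, psi1, rho), sym_L(X[:, perm], psi1, rho))
assert np.isclose(sym2_L(X, psi2, rho), sym2_L(X[:, perm], psi2, rho))
```

Note how the only structural difference is the pooling index set: single elements for $\mathrm{Sym}_L$, ordered pairs for $\mathrm{Sym}^2_L$.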
2.2 Main Result

Our main result demonstrates an exponential separation, where $\mathrm{Sym}_L$ requires exponentially large symmetric width $L$ to match the expressive power of the class $\mathrm{Sym}^2_L$ for $L = 1$. We choose norms to make this separation as prominent as possible: there is a hard function that can be approximated in $\mathrm{Sym}^2_L$ in the infinity norm, but cannot be approximated in $\mathrm{Sym}_L$ even in an appropriately chosen $L^2$ norm with respect to some non-trivial data distribution. We require one activation assumption to realize the $\mathrm{Sym}^2_L$ approximation:

Assumption 2.3. The activation $\sigma : \mathbb{C} \to \mathbb{C}$ is analytic, and for fixed $D, N$ and every $\epsilon > 0$ there exist two-layer neural networks $f_1, f_2$ using $\sigma$, both with $O\!\left(D^2 + D \log \frac{D}{\epsilon}\right)$ width and $O(D \log D)$ bounded weights, such that:
$$\sup_{|\xi| \le 3} \left| f_1(\xi) - \xi^2 \right| \le \epsilon, \qquad \sup_{|\xi| \le 3} \left| f_2(\xi) - \Bigl(1 - (\xi/4)^{\min(D, \sqrt{N}/2)}\Bigr) \frac{\xi - 1/4}{\xi/4 - 1} \right| \le \epsilon \tag{5}$$

Essentially this assumption guarantees that networks built with the analytic activation $\sigma$ are able to efficiently approximate the map $\xi \mapsto \xi^2$, and a truncated form of the finite Blaschke product [8] with one zero at $\xi = 4$. We show in Lemma G.3 that the exp activation satisfies this assumption; an informal numerical check of the squaring requirement is sketched after Remark 2 below.

Theorem 2.4 (Exponential width-separation). Fix $N$ and $D > 1$, and a non-trivial data distribution $\mu$ on $D \times N$ copies of the unit complex circle $(S^1)^{D \times N}$. Then there exists an analytic symmetric function $g : \mathbb{C}^{D \times N} \to \mathbb{C}$ such that $\|g\|_{L^2(\mu)} = 1$ and:
• For $L \le N^{-2} \exp(O(\min(D, \sqrt{N})))$,
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|^2_{L^2(\mu)} \ge \frac{1}{12}. \tag{6}$$
• There exists $f \in \mathrm{Sym}^2_L$ with $L = 1$, parameterized with an activation $\sigma$ that satisfies Assumption 2.3, with width $\mathrm{poly}(N, D, 1/\epsilon)$, depth $O(\log D)$, and max weight $O(D \log D)$ such that over $(S^1)^{D \times N}$:
$$\|f - g\|_\infty \le \epsilon \tag{7}$$

Remark 1. The lower bound is completely independent of the width and depth of the parameterized networks $\{\psi_l\}$ and $\rho$. The only parameter that the theorem restricts is the symmetric width $L$. This is in sharp contrast to the separations of vanilla networks [7], where there is a natural trade-off between width and depth.

Remark 2. In the upper bound, we consider the network $f \in \mathrm{Sym}^2_L$ to have width and depth in the usual sense of vanilla neural networks, where the parameterized maps $\{\psi_l\}$ and $\rho$ obey the width, depth, and weight bounds given.
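As an informal sanity check of the squaring half of Assumption 2.3 (this is a folklore finite-difference trick, not the construction of Lemma G.3, and it ignores the weight bounds of the assumption), a width-2 network with exp activation already approximates $\xi \mapsto \xi^2$ on $|\xi| \le 3$:

```python
import numpy as np

h = 0.01  # finite-difference scale; error shrinks like h**2

def f1(xi):
    """Width-2 'exp network': (exp(h*xi) + exp(-h*xi) - 2)/h**2 ~ xi**2."""
    return (np.exp(h * xi) + np.exp(-h * xi) - 2.0) / h ** 2

# Sup error over a grid of the complex disk |xi| <= 3.
r = np.linspace(0.0, 3.0, 60)[:, None]
theta = np.linspace(0.0, 2.0 * np.pi, 120)[None, :]
xi = r * np.exp(1j * theta)
print(np.abs(f1(xi) - xi ** 2).max())  # about h**2 * 3**4 / 12 ~ 7e-4
```

The Taylor expansion of $2\cosh(h\xi)$ shows the error is $h^2\xi^4/12 + O(h^4)$, so any target accuracy $\epsilon$ is reached by shrinking $h$.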
3 Related Work

3.1 Depth Separation

Numerous works have studied the difference in expressive power between different neural network architectures. Many of these works center on the representational gap between two-layer and three-layer networks [4, 7]. In particular, recent works have focused on generalizing the family of functions that realize these separations, to various radial functions [20] and non-radial functions [28]. A separate line of work considers separations between networks when the depth varies polynomially [24]. Notably, Vardi, Yehudai, and Shamir [26] demonstrate that depth has a greater impact on expressivity than width, in the case of vanilla neural networks.

3.2 Symmetric Architectures

We primarily consider the symmetric neural network parameterization as introduced in DeepSets [32], with PointNet [18] a similar symmetric parameterization using a different pooling function. Simple linear equivariant layers were also introduced in Zaheer et al. [32]. In the context of relationships between objects in an image, the first symmetric architecture enabling explicit pairwise interaction was introduced in Santoro et al. [21]. More complicated symmetric architectures, allowing for higher-order interaction and more substantial equivariant layers, were built on top of attention primitives [12, 13]. And the notion of explicit high-order interactions between set elements before symmetrizing is formalized in the architecture of Janossy pooling [16]. Symmetric architectures are generalized by graph neural networks [10, 22], under the restriction to the complete graph.

3.3 Symmetric Network Expressivity

The dependence of representational power on the symmetric width parameter $L$ was first demonstrated in the $D = 1$ case. Under the strong condition $L < N$, it was proven there are symmetric functions which cannot be exactly represented by a DeepSets network [29], and this was later strengthened to functions which cannot be approximated in the infinity norm to arbitrary precision [30]. The work introducing Janossy pooling [16] also includes a theoretical result showing singleton networks cannot exactly represent some particular pairwise symmetric network. Crucially however, this result is restricted to a simplified, non-universal symmetric architecture excluding the $\rho$ transformation, and therefore does not characterize the real-world architectures given above. The question of expressiveness in symmetric networks may also be generalized to graph neural networks, with a focus on distinguishing non-isomorphic graphs as compared to the Weisfeiler-Lehman test [31] and calculating invariants such as substructure counting [3]. In particular, one may understand expressiveness in symmetric networks incorporating pairwise interaction as the ability to learn functions of the complete graph decorated with edge features.

3.4 Symmetric Polynomial Theory

Our proofs rely on the technical machinery of symmetric polynomial theory, thoroughly characterized in Macdonald [14]. In particular, we utilize the integral representation of the finite-variable Hall inner product as introduced in Section A. Because this integral is defined over the complex unit circle, we consequently consider complex-valued neural networks [1]. The connection of symmetric networks to the powersum polynomials was first observed in Zaheer et al. [32], and likewise the multisymmetric powersum polynomials have been applied in higher dimensional symmetric problems [15, 23]. The algebraic properties of the multisymmetric powersum polynomials are well-studied, for example as a basis of higher dimensional symmetric polynomials [19] and through their algebraic dependencies [6]. However, to the best of our knowledge this is the first work to apply the Hall inner product to symmetric neural networks, and to extend this inner product to yield low-degree orthogonality over the multisymmetric polynomials.

4 Warmup: One-dimensional set elements

To begin, we consider the simpler case where $D = 1$, i.e. where we learn a symmetric function acting on a set of scalars. It was already observed in Zaheer et al. [32] that the universality of DeepSets could be demonstrated by approximating the network with symmetric polynomials. We first demonstrate that through this approximation, we can relate the symmetric width $L$ to expressive power.

4.1 Symmetric Polynomials

In order to approximate symmetric networks by symmetric polynomials, we choose a suitable basis. The powersum polynomials serve as the natural choice, as their structure matches that of a singleton symmetric network, and they obey very nice orthogonality properties that we detail below.

Definition 4.1. For $k \in \mathbb{N}$ and $x \in \mathbb{C}^N$, the normalized powersum polynomial is defined as
$$p_k(x) = \frac{1}{\sqrt{k}} \sum_{n=1}^{N} x_n^k$$
with $p_0(x) = 1$.
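Definition 4.1 in code, as a small sketch assuming NumPy:

```python
import numpy as np

def powersum(x, k):
    """Normalized powersum p_k(x) = sum_n x_n**k / sqrt(k), with p_0 = 1."""
    if k == 0:
        return 1.0 + 0.0j
    return np.sum(x ** k) / np.sqrt(k)

rng = np.random.default_rng(1)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, size=8))  # a set of N = 8 scalars
print(powersum(x, 1), powersum(x, 2), powersum(x, 3))
```

Each $p_k$ is itself a width-1 singleton symmetric network with $\psi(x_n) = x_n^k/\sqrt{k}$, which is why this basis matches the structure of $\mathrm{Sym}_L$ so closely.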
A classical result in symmetric polynomial theory is the existence of an $L^2$ inner product that grants orthogonality for products of powersums. To make this notion explicit and keep track of products, we index products with partitions.

Definition 4.2. An integer partition $\lambda$ is a non-increasing, finite sequence of positive integers $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_k$. The weight of the partition is given by $|\lambda| = \sum_{i=1}^{k} \lambda_i$. The length of a partition $l(\lambda)$ is the number of terms in the sequence. Then we characterize a product of powersums by:
$$p_\lambda(x) = \prod_i p_{\lambda_i}(x) \tag{8}$$
This notation intentionally also allows for the empty partition, such that if $\lambda = \emptyset$ then $p_\lambda = 1$. Altogether, we can now state the following remarkable fact:

Theorem 4.3 ([14, Chapter VI (9.10)]). There exists an $L^2(d\nu)$ inner product (for some probability measure $\nu$) such that, for partitions $\lambda, \mu$ with $|\lambda| \le N$:
$$\langle p_\lambda, p_\mu \rangle_V = z_\lambda \mathbf{1}_{\lambda = \mu} \tag{9}$$
where $z_\lambda$ is some combinatorial constant.

We index this inner product with $V$ because it is written as an expectation with respect to a density proportional to the squared Vandermonde polynomial (see Section A for the precise definition). This inner product may also be considered the finite-variable specialization of the Hall inner product, defined on symmetric polynomials over infinitely many variables [14, Chapter I (4.5)]. It's easy to check that the degree of $p_\lambda$ is equal to $|\lambda|$. So this theorem states that the powersum terms $p_\lambda$ are "almost" an orthogonal basis, except for correlation between two high-degree terms. Let us remark that we assume analytic activations for the sake of this theorem, as the orthogonality property does not hold for symmetric polynomials with negative exponents. However, in exchange for that assumption we can apply this very powerful inner product, that ultimately results in the irrelevance of network depth.

4.2 Projection Lemma

Before we can proceed to prove a representational lower bound, we need one tool to better understand $f \in \mathrm{Sym}_L$. Utilizing the orthogonality properties of the inner product $\langle \cdot, \cdot \rangle_V$ allows us to project any $f \in \mathrm{Sym}_L$ to a simplified form, while keeping a straightforward dependence on $L$. For example, consider some uniformly convergent power series (with no constant term) $\phi(x) = \sum_{k=1}^{\infty} c_k p_k(x)$. We claim $\langle p_2 p_1, \phi^3 \rangle_V = 0$. Indeed, expanding $\phi^3$, one exclusively gets terms of the form $p_{k_1} p_{k_2} p_{k_3}$, and because the partition $\{k_1, k_2, k_3\}$ is of a different length than $\{2, 1\}$, they are clearly distinct partitions so by orthogonality $\langle p_2 p_1, p_{k_1} p_{k_2} p_{k_3} \rangle_V = 0$. Motivated by this observation, we can project $f$ to only contain products of two terms. Let us introduce $P_1$ to be the orthogonal projection onto $\mathrm{span}(\{p_t : 1 \le t \le N/2\})$, and $P_2$ to be the orthogonal projection onto $\mathrm{span}(\{p_t p_{t'} : 1 \le t, t' \le N/2\})$.

Lemma 4.4. Given any $f \in \mathrm{Sym}_L$, we may choose coefficients $v_{ij}$ over $i \le j \le L$, and symmetric polynomials $\phi_i$ over $i \le L$, such that:
$$P_2 f = \sum_{i \le j \le L} v_{ij}\, (P_1 \phi_i)(P_1 \phi_j) \tag{10}$$

4.3 Rank Lemma

Given the reduced form of $f$ above, we may now go about lower bounding its approximation error to a given function $g$. By the properties of orthogonal projection, we have $\|f - g\|_V^2 \ge \|P_2(f - g)\|_V^2$. And by Parseval's theorem, the function approximation error $\|P_2 f - P_2 g\|_V^2$ equals
$$\sum_{t \le t'} \left( \left\langle P_2 f, \frac{p_t p_{t'}}{\|p_t p_{t'}\|_V} \right\rangle_V - \left\langle P_2 g, \frac{p_t p_{t'}}{\|p_t p_{t'}\|_V} \right\rangle_V \right)^2.$$
Rearranging the orthogonal coefficients in the form of matrices, we have the following fact:

Lemma 4.5. Given any $f \in \mathrm{Sym}_L$, and $g$ such that $P_2 g = g$, we have the bound
$$\|P_2 f - P_2 g\|_V^2 \ge \frac{1}{2} \|F - G\|_F^2 \tag{11}$$
where $F, G \in \mathbb{C}^{N/2 \times N/2}$ are matrices with entries $F_{tt'} = \langle P_2 f, p_t p_{t'} \rangle_V$, $G_{tt'} = \langle P_2 g, p_t p_{t'} \rangle_V$. Furthermore, $F$ has rank at most $L$.

The significance of this lemma is the rank constraint: it implies that choosing symmetric width $L$ corresponds to a maximum rank $L$ on the matrix $F$. From here, we can use standard arguments about low-rank approximation in the Frobenius norm to yield a lower bound.
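The standard argument referenced here is the Eckart-Young theorem: the best rank-$L$ Frobenius-norm approximation of $G$ is its truncated SVD, with residual equal to the tail of the squared spectrum. A quick numerical illustration (the matrix below is arbitrary; for the scaled-identity $G$ arising in Theorem 4.6 below, the residual works out to $N/2 - L$):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))

U, s, Vh = np.linalg.svd(G)
L = 3
F = (U[:, :L] * s[:L]) @ Vh[:L, :]  # best rank-L approximation of G

# Eckart-Young: min over rank <= L of ||F - G||_F^2 is the spectral tail.
print(np.linalg.norm(F - G) ** 2, np.sum(s[L:] ** 2))  # equal up to rounding
```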
4.4 Separation in one-dimensional case

Our main goal in this section is to construct a hard symmetric function $g$ that cannot be efficiently approximated by $\mathrm{Sym}_L$ for $L \le N/4$. It is not particularly expensive for the symmetric width $L$ to scale linearly with the set size $N$: however, we will use the same proof structure to prove Theorem 2.4, which will require $L$ to scale exponentially.

Theorem 4.6. For $D = 1$:
$$\max_{\|g\|_V = 1} \min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge 1 - \frac{2L}{N} \tag{12}$$
In particular, for $L = \frac{N}{4}$ we recover a constant lower bound of $\frac{1}{2}$.

Proof (sketch). Choose $g$ such that $P_2 g = g$. Then because $P_2$ is an orthogonal projection and applying Lemma 4.5:
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge \min_{f \in \mathrm{Sym}_L} \|P_2 f - P_2 g\|_V^2 \tag{13}$$
$$\ge \frac{1}{2} \min_{\mathrm{rank}(F) \le L} \|F - G\|_F^2 \tag{14}$$
We note that $\|p_t p_t\|_V^2 = z_{\{t,t\}} = 2$, so the choice of $g = \frac{1}{\sqrt{N}} \sum_{t=1}^{N/2} p_t p_t$ can be seen to obey $\|g\|_V = 1$, and implies that $G$ is the scaled identity matrix $\frac{2}{\sqrt{N}} I \in \mathbb{C}^{N/2 \times N/2}$. Then by standard properties of the SVD:
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge \frac{1}{2} \min_{\mathrm{rank}(F) \le L} \left\| F - \frac{2}{\sqrt{N}} I \right\|_F^2 \tag{15}$$
$$= \frac{1}{N/2} \min_{\mathrm{rank}(F) \le L} \|F - I\|_F^2 \tag{16}$$
$$= \frac{1}{N/2} \left( N/2 - L \right) \tag{17}$$
$$= 1 - \frac{2L}{N} \tag{18}$$

5 Proof Sketch of Main Result

5.1 Challenges for High-dimensional Set Elements

We'd like to strengthen this separation in several ways:
• Generalize to the $D > 1$ case,
• Realize a separation where the symmetric width $L$ must scale exponentially in $N$ and $D$, showing that $\mathrm{Sym}_L$ is infeasible,
• Show the hard function $g$ can nevertheless be efficiently approximated in $\mathrm{Sym}^2_L$ for $L$ polynomial in $N$ and $D$.

First, in order to approximate via polynomials in the high-dimensional case, we will require the high-dimensional analogue of powersum polynomials:

Definition 5.1. For a multi-index $\alpha \in \mathbb{N}^D$, the normalized multisymmetric powersum polynomial is defined as:
$$p_\alpha(X) = \frac{1}{\sqrt{|\alpha|}} \sum_n \prod_d x_{dn}^{\alpha_d}. \tag{19}$$

So the plan is to find a high-dimensional analogue of Lemma 4.4 and Lemma 4.5, now using multisymmetric powersum polynomials, mimic the proof of the $D = 1$ case, and then additionally show the hard function $g$ is efficiently computable in the pairwise symmetric architecture. Note that because the algebraic basis of multisymmetric powersum polynomials is of size $L^* = \binom{N+D}{N} - 1$, we can expect an exponential separation when we apply a similar rank argument.¹

¹We subtract one in order to discount the constant polynomial.
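Definition 5.1 in code, again a sketch assuming NumPy ($\alpha$ is a nonzero multi-index with one exponent per input coordinate):

```python
import numpy as np

def multi_powersum(X, alpha):
    """Normalized multisymmetric powersum p_alpha(X), X of shape (D, N)."""
    alpha = np.asarray(alpha)                         # one exponent per row
    monomials = np.prod(X ** alpha[:, None], axis=0)  # prod_d x_{d,n}**alpha_d
    return np.sum(monomials) / np.sqrt(alpha.sum())

rng = np.random.default_rng(3)
D, N = 4, 6
X = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(D, N)))
print(multi_powersum(X, (2, 0, 1, 0)))  # |alpha| = 3
```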
5.2 Sketch of Main Result (lower bound)

Because we are in high dimensions, we cannot simply apply the restricted Hall inner product introduced in Theorem 4.3. To the best of our knowledge, there is no standard generalization of the Hall inner product to multisymmetric polynomials that preserves the orthogonality property. For the main technical ingredient in the high-dimensional case we introduce a novel generalization, which builds on two inner products. First, we introduce a new input distribution $\nu$ over set inputs $X \in \mathbb{C}^{D \times N}$, and induce an $L^2$ inner product:
$$\langle f, g \rangle_A = \mathbb{E}_{X \sim \nu}\left[ f(X) \overline{g(X)} \right]. \tag{20}$$
We use this inner product to measure the approximation error of $\mathrm{Sym}_L$. That is, we seek a lower bound on $\min_{f \in \mathrm{Sym}_L} \|f - g\|_A$, for a suitable choice of hard function $g$. We can now apply an analogue of Lemma 4.4 to project $f$ to a simplified form. But we cannot immediately apply an analogue of Lemma 4.5, as it relied on Parseval's theorem and the low-degree multisymmetric powersum polynomials are not orthogonal in this inner product. Put another way, if we represent $\langle \cdot, \cdot \rangle_A$ as a matrix in the basis of low-degree multisymmetric powersums, it will be positive-definite but include some off-diagonal terms.

The idea is to now introduce a new inner product with a different input distribution $\nu_0$,
$$\langle f, g \rangle_{A_0} = \mathbb{E}_{X \sim \nu_0}\left[ f(X) \overline{g(X)} \right], \tag{21}$$
and define the bilinear form
$$\langle f, g \rangle_* = \langle f, g \rangle_A - 2 \langle f, g \rangle_{A_0}. \tag{22}$$
Typically positive-definiteness is lost when subtracting two inner products, but we prove that $\langle \cdot, \cdot \rangle_*$ is an inner product when restricted to a particular subspace of symmetric polynomials (see Theorem D.3). Furthermore, the careful choice of $\nu$ and $\nu_0$ cancels the off-diagonal correlation of different multisymmetric powersums, so they are orthogonal under this new inner product $\langle \cdot, \cdot \rangle_*$. By the norm domination $\| \cdot \|_A \ge \| \cdot \|_*$, we are able to pass from the former $L^2$ norm to the latter norm that obeys orthogonality, and apply an analogue of the Rank Lemma 4.5. Thus we derive a lower bound using any hard function $g$ whose corresponding matrix $G$ (built from orthogonal coefficients) is diagonal and high-rank. And because the total number of polynomials is $L^*$, the rank argument now yields an exponential separation.

Based on this proof, we have much freedom in our choice of $g$. By choosing its coefficients in the basis of multisymmetric powersum polynomials, it's easy to enforce the conditions that $G$ is diagonal and high-rank for a variety of possible functions. However, ensuring that $g$ is not pathological (i.e. that it is bounded and Lipschitz), and can be efficiently approximated in $\mathrm{Sym}^2_L$, requires a more careful choice.

5.3 Sketch of Main Result (upper bound)

It remains to approximate the hard function $g$ with a network from $\mathrm{Sym}^2_L$. First we must make a choice of $g$ in particular. Based on the lower bound proof, the key desideratum for $g$ is that it is supported exclusively on terms of the form $p_\alpha \overline{p_\alpha}$ over many values of $\alpha$, as this induces a diagonal and high-rank matrix $G$ in an analogue of Lemma 4.5. Furthermore, by simple algebra one can confirm that $p_\alpha(X)\overline{p_\alpha(X)} = \frac{1}{|\alpha|} \sum_{n,n'} \prod_{d=1}^{D} (x_{dn} \overline{x_{dn'}})^{\alpha_d}$, so $g$ supported on these polynomials can clearly be written in the form of a network in $\mathrm{Sym}^2_L$. This structure of $g$ guarantees difficult approximation, and is akin to the radial structure of the hard functions introduced in works on depth separation [7].

We must however be careful in our choice of $g$: for the matrix $G$ to be high-rank, $g$ must be supported on exponentially many powersum polynomials. But this could make $\|g\|_\infty$ exponentially large, and therefore challenging to approximate efficiently with a network from $\mathrm{Sym}^2_L$. We handle this difficulty by defining $g$ in a different way. We introduce a finite Blaschke product $\mu(\xi) = \frac{\xi - 1/4}{\xi/4 - 1}$, a function that analytically maps the unit complex circle to itself. Then the choice
$$g(X) = \sum_{n,n'=1}^{N} \prod_{d=1}^{D} \mu(x_{dn} \overline{x_{dn'}}) \tag{23}$$
ensures that $\|g\|_\infty$, $\|g\|_A$, and $\mathrm{Lip}(g)$ are all polynomial in $N, D, 1/\epsilon$ for approximation error $\epsilon$ (see Lemma E.3). Furthermore, again from simple algebra it is clear that $g$ is only supported on terms of the form $p_\alpha \overline{p_\alpha}$. So it remains to show that the induced diagonal matrix $G$ is effectively high rank, which follows from expanding the Blaschke products.

Satisfied that this choice of $g$ will meet the desiderata for the lower bound, and has no pathological behavior, it remains to construct $f \in \mathrm{Sym}^2_L$ for $L = 1$ that approximates $g$. That is, choose $\psi_1$ and $\rho$ so that $g(X) \approx \rho\left( \sum_{n,n'=1}^{N} \psi_1(x_n, x_{n'}) \right)$. Clearly we may take $\rho$ to be the identity, and $\psi_1(x_n, x_{n'})$ to approximate $\prod_{d=1}^{D} \mu(x_{dn} \overline{x_{dn'}})$, which is straightforwardly calculated in depth $O(\log D)$ by performing successive multiplications in a binary-tree-like structure (see Theorem F.1).
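A sketch of the hard function of equation (23), assuming NumPy and the conjugated pairwise products reconstructed above. Since $|\mu| = 1$ on the unit circle, every summand has modulus one and $|g| \le N^2$ on the torus, which is the kind of boundedness Lemma E.3 needs:

```python
import numpy as np

def mu(xi):
    """Finite Blaschke factor mu(xi) = (xi - 1/4) / (xi/4 - 1)."""
    return (xi - 0.25) / (xi / 4.0 - 1.0)

def g(X):
    """g(X) = sum over pairs (n, n') of prod_d mu(x_{d,n} * conj(x_{d,n'}))."""
    P = X[:, :, None] * np.conj(X)[:, None, :]  # all pairwise products, (D,N,N)
    return np.sum(np.prod(mu(P), axis=0))

# mu maps the unit circle to itself, so each summand has modulus 1.
circle = np.exp(1j * np.linspace(0, 2 * np.pi, 200))
assert np.allclose(np.abs(mu(circle)), 1.0)

rng = np.random.default_rng(4)
D, N = 3, 5
X = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(D, N)))
val = g(X)
print(val, np.abs(val) <= N ** 2 + 1e-9)  # |g| <= N**2 on the torus
```

Note the pairwise-product structure inside g is exactly the $\mathrm{Sym}^2_L$ pooling pattern with $L = 1$.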
Ultimately, we use a slight variant of this function for the formal proof. Because the orthogonality of our newly introduced inner product $\langle \cdot, \cdot \rangle_*$ only holds for low-degree polynomials, we must truncate high-degree terms of $g$; we confirm in Appendix F that this truncation nevertheless preserves the properties we care about.

6 Discussion

In this work, we've demonstrated how symmetric width captures more of the expressive power of symmetric networks than depth when restricted to analytic activations, by evincing an exponential separation between two of the most common architectures that enforce permutation invariance. The most unusual property of this result is the complete independence of depth, owing to the unique orthogonality properties of the restricted Hall inner product when paired with the assumption of analyticity. This stands in contrast to the case of vanilla neural networks, for which separations beyond small depth would resolve open questions in circuit complexity suspected to be quite hard [25]. Furthermore, the greater dependence on width than depth is a property unique to symmetric networks, whereas the opposite is true for vanilla networks [26]. A natural extension would be to consider the simple equivariant layers introduced in Zaheer et al. [32], which we suspect will not substantially improve the approximation power of $\mathrm{Sym}_L$. Furthermore, allowing for multiple such equivariant layers, this network becomes exactly akin to a Graph Convolutional Network [10] on a complete graph, whereas $\mathrm{Sym}^2_L$ corresponds to a message passing network [9] as it is capable of interpreting edge features.

6.1 Limitations

The major limitation of this result is the restriction to analytic functions. Although analytic symmetric functions nevertheless appear crucially in the study of exactly solvable quantum systems [2, 11], this assumption may be overly strict for general problems of learning symmetric functions. We nevertheless conjecture that these bounds will still hold even allowing for non-analytic activations, and consider this an exciting question for future work. Additionally, whether the hard function $g$ can be efficiently learned with gradient descent remains unclear, and future work could address this question of learnability.

Acknowledgements: This work has been partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF-1845360, and NSF CCF-1814524.
1. What is the focus of the paper regarding symmetric neural networks and their expressive power?
2. What are the strengths of the proposed approach, particularly in terms of proof technique and mathematical tools?
3. What are the weaknesses of the paper, especially regarding its motivation and essential use of complex numbers and analytic activations?
4. Do you have any concerns regarding the separating construction used in the paper, considering its intricacy and depth?
5. Can you provide examples of practical problems where Relational Networks might outperform DeepSets, improving the paper's motivation?
6. Are complex networks used in practice, justifying their assumption in the paper?
7. Can you give examples of commonly used activations that satisfy Assumption 2.3, making it less of a limitation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper studies the expressive power of symmetric neural networks used to compute functions on sets, specifically the Relational Networks and DeepSets architectures. An exponential separation result is given where a certain function can be represented efficiently using a Relational Network with symmetric width L = 1, yet cannot be approximated by a DeepSets architecture to accuracy better than a constant unless its symmetric width is exponential in either N (the size of the set) or D (the dimension of each element).

Strengths And Weaknesses
Strengths: The paper is well-written and easy to follow. The proof technique used bears a certain (subjective) elegance. The mathematical tools used in the proof, which leverage symmetric polynomials to establish the main result, seem like the natural machinery for studying the expressive power of symmetric architectures, and the proof idea seems like a clever use of these tools.
Weaknesses: It feels like the paper does not sufficiently motivate the problem it addresses. To give a concrete comparison, the separation in [1] is well-motivated since it was widely observed that deeper architectures are much more successful in many learning tasks. It is not clear if this is also the case when using Relational Networks instead of DeepSets. The proof technique seems to make essential use of complex numbers and analytic activations. While the authors are overall transparent about these weaknesses, it doesn't feel like they are properly discussed. It was shown in [2] that certain functions used to separate depth cannot be learned using gradient methods. Since the construction used to approximate the target function using a Relational Network seems intricate and also uses a relatively deep architecture, I'm not sure if it can be learned efficiently. While I find the result in the paper interesting nevertheless, I think it would be appropriate to add a comment about this.
Citations used:
[1] - Eldan and Shamir: The Power of Depth for Feedforward Neural Networks
[2] - Malach et al.: The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

Questions
Can you motivate the study of this problem further? I'm curious to know if, similarly to Eldan and Shamir, there are known practical problems where Relational Networks outperform DeepSets. Such examples will make the paper better motivated in my opinion. Are complex networks used in practice in any context? This will make the assumption of complex-valued inputs much more justified. In the paper, you clearly state the analyticity assumption as a weakness. You do not discuss, however, what commonly used activations satisfy your assumption. Is Assumption 2.3 satisfied by any sigmoidal activation, e.g. logistic, tanh, or arctan? Providing an example beyond the exponential activation will make this weakness much milder.

Limitations
Yes
NIPS
1. What is the focus of the paper regarding neural network functions and their representation power?
2. What are the strengths of the proposed approach, particularly in terms of its technical complexity and explanatory nature?
3. What are the weaknesses of the paper, especially concerning its reliance on analytic assumptions?
4. Can you provide examples of activation functions that meet the analytic assumption?
5. How does the construction of the lower bound instance in the high-dimensional case differ from the one-dimensional case?
6. Why are most references listed in an et al. style, while a few are properly cited?
7. What are the potential negative societal impacts associated with the research presented in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper considers the representational power of neural network functions. It considers two types of network architectures: the DeepSets architecture, which treats the inputs in a permutation-invariant manner; and the Relational Network architecture, which allows for pairwise interaction among the inputs. The question considered in this paper is to compare the representation power of these two types of architectures. The main result is a "width separation" between the DeepSets and the Relational Network architectures. It shows that there exists a function such that for any width less than exp(min(input dimension, input set size)), the best function from the DeepSets family incurs a constant error. On the other hand, with poly(input dimension, input set size) width, the Relational Network could represent the same function up to arbitrarily small error. The lower bound for the one-dimensional case is as follows: the "DeepSets" networks with width L restrict the space of functions to within some space of rank relating to L; so for a "high"-rank function, its orthogonal projection onto this rank-L space could still leave a large residual. The high-dimensional case requires using high-dimensional powersum polynomials. However, the construction of the lower bound instance is more delicate than the above.

Strengths And Weaknesses
Strengths: This paper deals with an important yet technically challenging question. The proof requires sophisticated machinery from symmetric polynomial theory. However, the authors do a great job of explaining their results and building up their proofs, which is instructive for a reader. The writing of the paper is a pleasure to read, with particular attention paid to the proof details. The width separation between DeepSets and Relational Networks would be a significant contribution to the literature; for example, one could imagine that the machinery developed here might help separate architectures in other settings such as graph neural networks, where this sort of permutation invariance appears quite often.
Weakness: The main result relies on a certain analytic assumption on the activation functions of the neural network (as the authors have discussed in the limitations section).

Questions
What are some examples of activation functions under the analytic assumption? For the high-dimensional case, the construction of the lower bound instance (in section 5.3) seems to require much more work; a better explanation of this part would be appreciated. Almost all of the references are shown in an et al. style on pages 10 and 11, except a few. Is this done on purpose, e.g., should the names of all authors be properly cited there?

Limitations
Both the limitations and the potential negative societal impact are discussed in the paper.
NIPS
Title Exponential Separations in Symmetric Neural Networks

Abstract

In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network [21] architecture as a natural generalization of the DeepSets [32] architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size N with elements in dimension D, which can be efficiently approximated by the former architecture, but provably requires width exponential in N and D for the latter.

1 Introduction

The modern success of deep learning can in part be attributed to architectures that enforce appropriate invariance. Invariance to permutation of the input, i.e. treating the input as an unordered set, is a desirable property when learning symmetric functions in such fields as particle physics and population statistics. The simplest architectures that enforce permutation invariance treat each set element individually without allowing for interaction, as captured by the popular DeepSets model [18, 32]. Several architectures explicitly enable interaction between set elements, the simplest being the Relational Networks [21] that encode pairwise interaction. This may be understood as an instance of self-attention, the mechanism underlying Transformers [27], which have emerged as powerful generic neural network architectures to process a wide variety of data, from image patches to text to physical data. Specifically, Set Transformers [12] are special instantiations of Transformers, made permutation equivariant by omitting positional encoding of inputs, and using self-attention for pooling.

Both the DeepSets and Relational Network architectures are universal approximators for the class of symmetric functions. But empirical evidence suggests an inherent advantage of symmetric networks using self-attention in synthetic settings [16], on point cloud data [12] and in quantum chemistry [17]. In this work, we formalize this question in terms of approximation power, and explicitly construct symmetric functions which provably require exponentially many neurons in the DeepSets model, yet are efficiently approximated with self-attention.

This exponential separation bears notable differences from typical separation results. In particular, while the expressive power of a vanilla neural network is characterized by depth and width, the expressiveness of symmetric networks is controlled particularly by symmetric width. In contrast to depth separations of vanilla neural networks [7], in this work we observe width separations, where the weaker architectures (even with arbitrary depth) require exponential symmetric width to match the expressive power of stronger architectures.
Summary of Contributions

In this work:
• We demonstrate a width separation between the DeepSets and Relational Network architectures, where the former requires symmetric width $L \gg \mathrm{poly}(N, D)$ to approximate a family of analytic symmetric functions, while the latter can approximate them with polynomial efficiency. This also answers an open question on high-dimensional DeepSets representation posed in Wagstaff et al. [30].
• We introduce an extension of the Hall inner product to high dimensions that preserves low-degree orthogonality of multisymmetric powersum polynomials, which may be of independent interest.

2 Setup and Main Result

2.1 Symmetric Architectures

To introduce the symmetric architectures, we must first characterize how to treat sets as inputs. We will consider sets of size N, where each element of the set is a vector of dimension D. In particular, we will represent a set as a matrix $X \in \mathbb{C}^{D \times N}$. Thus, each column vector $x_n \in \mathbb{C}^D$ is an element of the set. Note that we consider complex-valued inputs because the natural inner product over symmetric polynomials integrates over the complex unit circle; see Macdonald [14] or Theorem 4.3.

A function $f : \mathbb{C}^{D \times N} \to \mathbb{C}$ is symmetric if $f(X) = f(X\Pi)$ for any permutation matrix $\Pi \in \mathbb{R}^{N \times N}$, i.e. if f is invariant to permuting the columns of X. In other words, a symmetric function treats the input X as an unordered set of column vectors. Given the symmetric width parameter L, we consider two primary symmetric architectures:

Definition 2.1. Let $\mathrm{Sym}_L$ denote the class of singleton symmetric networks with symmetric width L, i.e. functions f of the form:
$$f(X) = \rho(\phi_1(X), \ldots, \phi_L(X)) \qquad (1)$$
$$\phi_l(X) = \sum_{n=1}^{N} \psi_l(x_n) \qquad (2)$$
where $\{\psi_l : \mathbb{C}^D \to \mathbb{C}\}_{l=1}^{L}$ and $\rho : \mathbb{C}^L \to \mathbb{C}$ are arbitrary neural networks with analytic activations.

The class $\mathrm{Sym}_L$ is exactly the architecture of DeepSets [32] restricted to analytic activations. However, we introduce this notation to differentiate this class from the more expressive architectures that allow for pairwise interaction among set elements. From the theory of symmetric polynomials, if $L \ge L^* := \binom{N+D}{N} - 1$, then $f \in \mathrm{Sym}_L$ is a universal approximator for any analytic symmetric function [19]. Therefore we will primarily be interested in the expressive power of $\mathrm{Sym}_L$ for $L < L^*$.

Definition 2.2. Let $\mathrm{Sym}^2_L$ denote the class of pairwise symmetric networks with symmetric width L, i.e. functions f of the form:
$$f(X) = \rho(\phi_1(X), \ldots, \phi_L(X)) \qquad (3)$$
$$\phi_l(X) = \sum_{n, n'=1}^{N} \psi_l(x_n, x_{n'}) \qquad (4)$$
where $\{\psi_l : \mathbb{C}^D \times \mathbb{C}^D \to \mathbb{C}\}_{l=1}^{L}$ and $\rho : \mathbb{C}^L \to \mathbb{C}$ are arbitrary neural networks with analytic activations.

Similarly, the class $\mathrm{Sym}^2_L$ is exactly the architecture of Relational Pooling [21] with analytic activations. We note this architecture is also equivalent to the 2-ary instantiation of Janossy Pooling [16].
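To make Definitions 2.1 and 2.2 concrete, the following is a minimal NumPy sketch of the two function classes, with tiny two-layer exp-activation networks standing in for the ψ_l and ρ. All names here (mlp, SingletonSymmetric, PairwiseSymmetric) are our own illustration, not code from the paper, and we use real-valued inputs for simplicity even though the paper works over the complex numbers.

import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim, hidden=16):
    # A tiny two-layer network with the analytic activation exp.
    W1 = rng.normal(size=(hidden, in_dim)) * 0.1
    W2 = rng.normal(size=(out_dim, hidden)) * 0.1
    return lambda z: W2 @ np.exp(W1 @ z)

class SingletonSymmetric:          # Sym_L (DeepSets-style, Definition 2.1)
    def __init__(self, D, L):
        self.psi = [mlp(D, 1) for _ in range(L)]
        self.rho = mlp(L, 1)
    def __call__(self, X):         # X has shape (D, N); columns are set elements
        phi = np.array([sum(p(X[:, n]) for n in range(X.shape[1]))[0]
                        for p in self.psi])
        return self.rho(phi)[0]

class PairwiseSymmetric:           # Sym^2_L (Relational-Network-style, Definition 2.2)
    def __init__(self, D, L):
        self.psi = [mlp(2 * D, 1) for _ in range(L)]
        self.rho = mlp(L, 1)
    def __call__(self, X):
        N = X.shape[1]
        phi = np.array([sum(p(np.concatenate([X[:, n], X[:, m]]))
                            for n in range(N) for m in range(N))[0]
                        for p in self.psi])
        return self.rho(phi)[0]

# Permutation invariance: shuffling the columns leaves the output unchanged.
D, N = 3, 5
X = rng.normal(size=(D, N))
f, g = SingletonSymmetric(D, L=4), PairwiseSymmetric(D, L=2)
Xp = X[:, rng.permutation(N)]
assert np.allclose(f(X), f(Xp)) and np.allclose(g(X), g(Xp))

Note the structural difference the separation result is about: the singleton class pools N per-element values, while the pairwise class pools N² per-pair values before ρ is applied.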
2.2 Main Result

Our main result demonstrates an exponential separation, where $\mathrm{Sym}_L$ requires exponentially large symmetric width L to match the expressive power of the class $\mathrm{Sym}^2_L$ even for L = 1. We choose norms to make this separation as prominent as possible: there is a hard function that can be approximated in $\mathrm{Sym}^2_L$ in the infinity norm, but cannot be approximated in $\mathrm{Sym}_L$ even in an appropriately chosen $L^2$ norm with respect to some non-trivial data distribution. We require one activation assumption to realize the $\mathrm{Sym}^2_L$ approximation:

Assumption 2.3. The activation $\sigma : \mathbb{C} \to \mathbb{C}$ is analytic, and for fixed D, N and every $\epsilon > 0$ there exist two-layer neural networks $f_1, f_2$ using $\sigma$, both with $O(D^2 + D \log \frac{D}{\epsilon})$ width and $O(D \log D)$ bounded weights, such that:
$$\sup_{|\xi| \le 3} \left| f_1(\xi) - \xi^2 \right| \le \epsilon, \qquad \sup_{|\xi| \le 3} \left| f_2(\xi) - \left(1 - (\xi/4)^{\min(D, \sqrt{N}/2)}\right) \frac{\xi - 1/4}{\xi/4 - 1} \right| \le \epsilon \qquad (5)$$

Essentially this assumption guarantees that networks built with the analytic activation σ are able to efficiently approximate the map $\xi \mapsto \xi^2$ and a truncated form of the finite Blaschke product [8] with one zero at $\xi = 4$. We show in Lemma G.3 that the exp activation satisfies this assumption.

Theorem 2.4 (Exponential width-separation). Fix N and D > 1, and a non-trivial data distribution µ on D × N copies of the unit complex circle, $(S^1)^{D \times N}$. Then there exists an analytic symmetric function $g : \mathbb{C}^{D \times N} \to \mathbb{C}$ such that $\|g\|_{L^2(\mu)} = 1$ and:
• For $L \le N^{-2} \exp(O(\min(D, \sqrt{N})))$,
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|^2_{L^2(\mu)} \ge \frac{1}{12}. \qquad (6)$$
• For every $\epsilon > 0$, there exists $f \in \mathrm{Sym}^2_L$ with L = 1, parameterized with an activation σ that satisfies Assumption 2.3, with width $\mathrm{poly}(N, D, 1/\epsilon)$, depth $O(\log D)$, and max weight $O(D \log D)$, such that over $(S^1)^{D \times N}$:
$$\|f - g\|_\infty \le \epsilon \qquad (7)$$

Remark 1. The lower bound is completely independent of the width and depth of the parameterized networks $\{\psi_l\}$ and ρ. The only parameter that the theorem restricts is the symmetric width L. This is in sharp contrast to the separations of vanilla networks [7], where there is a natural trade-off between width and depth.

Remark 2. In the upper bound, we consider the network $f \in \mathrm{Sym}^2_L$ to have width and depth in the usual sense of vanilla neural networks, where the parameterized maps $\{\psi_l\}$ and ρ obey the width, depth, and weight bounds given.
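As a quick illustration of why an analytic activation like exp can satisfy the squaring requirement in Assumption 2.3 (the paper's full proof is Lemma G.3, which we do not reproduce), a second central difference of exp yields a three-neuron two-layer network approximating $\xi \mapsto \xi^2$ on $|\xi| \le 3$. Note this naive construction, which is our own, has outer weights of size $1/h^2$, so the paper's bounded-weight version requires a more careful argument.

import numpy as np

# f1(xi) = (exp(h*xi) - 2*exp(0*xi) + exp(-h*xi)) / h^2 is a two-layer network
# with three exp neurons; by Taylor expansion it equals xi^2 + O(h^2 * xi^4).
h = 1e-3
f1 = lambda xi: (np.exp(h * xi) - 2.0 + np.exp(-h * xi)) / h ** 2

# The error is analytic, so its maximum over the disc |xi| <= 3 is attained on
# the boundary circle (maximum modulus principle); checking |xi| = 3 suffices.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
xi = 3.0 * np.exp(1j * theta)
print(np.max(np.abs(f1(xi) - xi ** 2)))   # ~7e-6, i.e. h^2 * |xi|^4 / 12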
3 Related Work

3.1 Depth Separation

Numerous works have studied the difference in expressive power between different neural network architectures. Many of these works center on the representational gap between two-layer and three-layer networks [4, 7]. In particular, recent works have focused on generalizing the family of functions that realize these separations, to various radial functions [20] and non-radial functions [28]. A separate line of work considers separations between networks when the depth varies polynomially [24]. Notably, Vardi, Yehudai, and Shamir [26] demonstrate that depth has a greater impact on expressivity than width in the case of vanilla neural networks.

3.2 Symmetric Architectures

We primarily consider the symmetric neural network parameterization as introduced in DeepSets [32], with PointNet [18] a similar symmetric parameterization using a different pooling function. Simple linear equivariant layers were also introduced in Zaheer et al. [32]. In the context of relationships between objects in an image, the first symmetric architecture enabling explicit pairwise interaction was introduced in Santoro et al. [21]. More complicated symmetric architectures, allowing for higher-order interaction and more substantial equivariant layers, were built on top of attention primitives [12, 13]. And the notion of explicit high-order interactions between set elements before symmetrizing is formalized in the architecture of Janossy pooling [16]. Symmetric architectures are generalized by graph neural networks [10, 22], under the restriction to the complete graph.

3.3 Symmetric Network Expressivity

The dependence of representational power on the symmetric width parameter L was first demonstrated in the D = 1 case. Under the strong condition L < N, it was proven that there are symmetric functions which cannot be exactly represented by a DeepSets network [29], and this was later strengthened to functions which cannot be approximated in the infinity norm to arbitrary precision [30]. The work introducing Janossy pooling [16] also includes a theoretical result showing singleton networks cannot exactly represent some particular pairwise symmetric network. Crucially however, this result is restricted to a simplified, non-universal symmetric architecture excluding the ρ transformation, and therefore does not characterize the real-world architectures given above.

The question of expressiveness in symmetric networks may also be generalized to graph neural networks, with a focus on distinguishing non-isomorphic graphs as compared to the Weisfeiler-Lehman test [31] and calculating invariants such as substructure counting [3]. In particular, one may understand expressiveness in symmetric networks incorporating pairwise interaction as the ability to learn functions of the complete graph decorated with edge features.

3.4 Symmetric Polynomial Theory

Our proofs rely on the technical machinery of symmetric polynomial theory, thoroughly characterized in Macdonald [14]. In particular, we utilize the integral representation of the finite-variable Hall inner product as introduced in Section A. Because this integral is defined over the complex unit circle, we consequently consider complex-valued neural networks [1]. The connection of symmetric networks to the powersum polynomials was first observed in Zaheer et al. [32], and likewise the multisymmetric powersum polynomials have been applied in higher-dimensional symmetric problems [15, 23]. The algebraic properties of the multisymmetric powersum polynomials are well studied, for example as a basis of higher-dimensional symmetric polynomials [19] and through their algebraic dependencies [6]. However, to the best of our knowledge this is the first work to apply the Hall inner product to symmetric neural networks, and to extend this inner product to yield low-degree orthogonality over the multisymmetric polynomials.

4 Warmup: One-dimensional set elements

To begin, we consider the simpler case where D = 1, i.e. where we learn a symmetric function acting on a set of scalars. It was already observed in Zaheer et al. [32] that the universality of DeepSets could be demonstrated by approximating the network with symmetric polynomials. We first demonstrate that through this approximation, we can relate the symmetric width L to expressive power.

4.1 Symmetric Polynomials

In order to approximate symmetric networks by symmetric polynomials, we choose a suitable basis. The powersum polynomials serve as the natural choice, as their structure matches that of a singleton symmetric network, and they obey very nice orthogonality properties that we detail below.

Definition 4.1. For $k \in \mathbb{N}$ and $x \in \mathbb{C}^N$, the normalized powersum polynomial is defined as
$$p_k(x) = \frac{1}{\sqrt{k}} \sum_{n=1}^{N} x_n^k$$
with $p_0(x) = 1$.

A classical result in symmetric polynomial theory is the existence of an $L^2$ inner product that grants orthogonality for products of powersums. To make this notion explicit and keep track of products, we index products with partitions.

Definition 4.2. An integer partition λ is a non-increasing, finite sequence of positive integers $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k$. The weight of the partition is given by $|\lambda| = \sum_{i=1}^{k} \lambda_i$. The length of a partition $l(\lambda)$ is the number of terms in the sequence.
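Definitions 4.1 and 4.2 translate directly into code; a minimal sketch with our own helper names (the product over a partition anticipates eq. (8) in the next subsection):

import numpy as np

def powersum(k, x):
    # Normalized powersum p_k(x) = (1/sqrt(k)) * sum_n x_n^k, with p_0 = 1.
    return 1.0 if k == 0 else np.sum(x ** k) / np.sqrt(k)

def powersum_partition(lam, x):
    # p_lambda(x) = prod_i p_{lambda_i}(x); the empty partition gives 1.
    out = 1.0
    for part in lam:
        out = out * powersum(part, x)
    return out

rng = np.random.default_rng(0)
x = np.exp(2j * np.pi * rng.random(6))   # a set of N = 6 points on the unit circle
lam = (3, 2, 2)                          # a partition with weight |lam| = 7, length 3
print(powersum_partition(lam, x))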
Then we characterize a product of powersums by:
$$p_\lambda(x) = \prod_i p_{\lambda_i}(x) \qquad (8)$$
This notation intentionally also allows for the empty partition, such that if $\lambda = \emptyset$ then $p_\lambda = 1$. All together, we can now state the following remarkable fact:

Theorem 4.3 ([14, Chapter VI (9.10)]). There exists an $L^2(d\nu)$ inner product (for some probability measure ν) such that, for partitions λ, µ with $|\lambda| \le N$:
$$\langle p_\lambda, p_\mu \rangle_V = z_\lambda \mathbf{1}_{\lambda = \mu} \qquad (9)$$
where $z_\lambda$ is some combinatorial constant.

We index this inner product with V because it is written as an expectation with respect to a density proportional to the squared Vandermonde polynomial (see Section A for the precise definition). This inner product may also be considered the finite-variable specialization of the Hall inner product, defined on symmetric polynomials over infinitely many variables [14, Chapter I (4.5)]. It is easy to check that the degree of $p_\lambda$ is equal to $|\lambda|$. So this theorem states that the powersum terms $p_\lambda$ are "almost" an orthogonal basis, except for correlation between high-degree terms. Let us remark that we assume analytic activations for the sake of this theorem, as the orthogonality property does not hold for symmetric polynomials with negative exponents. However, in exchange for that assumption we can apply this very powerful inner product, which ultimately results in the irrelevance of network depth.

4.2 Projection Lemma

Before we can proceed to prove a representational lower bound, we need one tool to better understand $f \in \mathrm{Sym}_L$. Utilizing the orthogonality properties of the inner product $\langle \cdot, \cdot \rangle_V$ allows us to project any $f \in \mathrm{Sym}_L$ to a simplified form, while keeping a straightforward dependence on L. For example, consider some uniformly convergent power series (with no constant term) $\phi(x) = \sum_{k=1}^{\infty} c_k p_k(x)$. We claim $\langle p_2 p_1, \phi^3 \rangle_V = 0$. Indeed, expanding $\phi^3$, one exclusively gets terms of the form $p_{k_1} p_{k_2} p_{k_3}$, and because the partition $\{k_1, k_2, k_3\}$ is of a different length than $\{2, 1\}$, they are clearly distinct partitions, so by orthogonality $\langle p_2 p_1, p_{k_1} p_{k_2} p_{k_3} \rangle_V = 0$.

Motivated by this observation, we can project f to only contain products of two terms. Let us introduce $P_1$ to be the orthogonal projection onto $\mathrm{span}(\{p_t : 1 \le t \le N/2\})$, and $P_2$ to be the orthogonal projection onto $\mathrm{span}(\{p_t p_{t'} : 1 \le t, t' \le N/2\})$.

Lemma 4.4. Given any $f \in \mathrm{Sym}_L$, we may choose coefficients $v_{ij}$ over $i \le j \le L$, and symmetric polynomials $\phi_i$ over $i \le L$, such that:
$$P_2 f = \sum_{i \le j}^{L} v_{ij} (P_1 \phi_i)(P_1 \phi_j) \qquad (10)$$

4.3 Rank Lemma

Given the reduced form of f above, we may now go about lower bounding its approximation error to a given function g. By the properties of orthogonal projection, we have $\|f - g\|_V^2 \ge \|P_2(f - g)\|_V^2$. And by Parseval's theorem, the function approximation error $\|P_2 f - P_2 g\|_V^2$ equals
$$\sum_{t \le t'} \left( \left\langle P_2 f, \frac{p_t p_{t'}}{\|p_t p_{t'}\|_V} \right\rangle_V - \left\langle P_2 g, \frac{p_t p_{t'}}{\|p_t p_{t'}\|_V} \right\rangle_V \right)^2.$$
Rearranging the orthogonal coefficients in the form of matrices, we have the following fact:

Lemma 4.5. Given any $f \in \mathrm{Sym}_L$, and g such that $P_2 g = g$, we have the bound
$$\|P_2 f - P_2 g\|_V^2 \ge \frac{1}{2} \|F - G\|_F^2 \qquad (11)$$
where $F, G \in \mathbb{C}^{N/2 \times N/2}$ are matrices with entries $F_{tt'} = \langle P_2 f, p_t p_{t'} \rangle_V$ and $G_{tt'} = \langle P_2 g, p_t p_{t'} \rangle_V$. Furthermore, F has maximum rank L.

The significance of this lemma is the rank constraint: it implies that choosing symmetric width L corresponds to a maximum rank L on the matrix F. From here, we can use standard arguments about low-rank approximation in the Frobenius norm to yield a lower bound.
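The orthogonality in Theorem 4.3 can be checked numerically. The precise measure is defined in Section A (not reproduced here); below we assume only what the text states, namely a density on $(S^1)^N$ proportional to the squared modulus of the Vandermonde polynomial, and estimate the inner product by importance sampling from the uniform measure on the torus. A sketch with our own code:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, S = 6, 200_000

# Uniform samples on the torus (S^1)^N, reweighted by |Vandermonde(x)|^2.
x = np.exp(2j * np.pi * rng.random((S, N)))
vdm = np.ones(S, dtype=complex)
for i, j in combinations(range(N), 2):
    vdm *= x[:, j] - x[:, i]
w = np.abs(vdm) ** 2

def p(k):      # normalized powersum, evaluated on every sample
    return np.sum(x ** k, axis=1) / np.sqrt(k)

def inner(f, g):   # self-normalized importance-sampling estimate of <f, g>_V
    return np.sum(w * f * np.conj(g)) / np.sum(w)

p22, p112 = p(2) * p(2), p(1) * p(1) * p(2)   # partitions {2,2} and {1,1,2}
print(abs(inner(p22, p112)))   # approximately 0: distinct partitions of weight 4 <= N
print(inner(p22, p22).real)    # approximately 2, consistent with ||p_t p_t||_V^2 = 2 used below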
4.4 Separation in the one-dimensional case

Our main goal in this section is to construct a hard symmetric function g that cannot be efficiently approximated by $\mathrm{Sym}_L$ for $L \le N/4$. It is not particularly expensive for the symmetric width L to scale linearly with the set size N; however, we will use the same proof structure to prove Theorem 2.4, which will require L to scale exponentially.

Theorem 4.6. For D = 1:
$$\max_{\|g\|_V = 1} \ \min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge 1 - \frac{2L}{N} \qquad (12)$$
In particular, for $L = \frac{N}{4}$ we recover a constant lower bound of $\frac{1}{2}$.

Proof (sketch). Choose g such that $P_2 g = g$. Then, because $P_2$ is an orthogonal projection, and applying Lemma 4.5:
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge \min_{f \in \mathrm{Sym}_L} \|P_2 f - P_2 g\|_V^2 \qquad (13)$$
$$\ge \frac{1}{2} \min_{\mathrm{rank}(F) \le L} \|F - G\|_F^2 \qquad (14)$$
We note that $\|p_t p_t\|_V^2 = z_{\{t,t\}} = 2$, so the choice of $g = \frac{1}{\sqrt{N}} \sum_{t=1}^{N/2} p_t p_t$ can be seen to obey $\|g\|_V = 1$, and implies that G is the scaled identity matrix $\frac{2}{\sqrt{N}} I \in \mathbb{C}^{N/2 \times N/2}$. Then by standard properties of the SVD:
$$\min_{f \in \mathrm{Sym}_L} \|f - g\|_V^2 \ge \frac{1}{2} \min_{\mathrm{rank}(F) \le L} \left\| F - \frac{2}{\sqrt{N}} I \right\|_F^2 \qquad (15)$$
$$= \frac{1}{N/2} \min_{\mathrm{rank}(F) \le L} \|F - I\|_F^2 \qquad (16)$$
$$= \frac{1}{N/2} (N/2 - L) \qquad (17)$$
$$= 1 - \frac{2L}{N} \qquad (18)$$
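The final step, equations (15)-(18), is the Eckart-Young fact that the best rank-L Frobenius approximation of a matrix keeps its top L singular values; a quick numerical confirmation of the $1 - 2L/N$ bound (our own sketch):

import numpy as np

N, L = 40, 10
M = N // 2
G = (2.0 / np.sqrt(N)) * np.eye(M)   # matrix induced by g = (1/sqrt(N)) * sum_t p_t p_t

# Eckart-Young: the best rank-L approximation in Frobenius norm keeps the
# top L singular values of G and zeroes out the rest.
U, s, Vt = np.linalg.svd(G)
F = (U[:, :L] * s[:L]) @ Vt[:L, :]

print(0.5 * np.linalg.norm(F - G, "fro") ** 2)   # 0.5
print(1.0 - 2.0 * L / N)                         # matches: 1 - 2L/N = 0.5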
5 Proof Sketch of Main Result

5.1 Challenges for High-dimensional Set Elements

We would like to strengthen this separation in several ways:
• Generalize to the D > 1 case;
• Realize a separation where the symmetric width L must scale exponentially in N and D, showing that $\mathrm{Sym}_L$ is infeasible;
• Show that the hard function g can nevertheless be efficiently approximated in $\mathrm{Sym}^2_L$ for L polynomial in N and D.

First, in order to approximate via polynomials in the high-dimensional case, we will require the high-dimensional analogue of powersum polynomials:

Definition 5.1. For a multi-index $\alpha \in \mathbb{N}^D$, the normalized multisymmetric powersum polynomial is defined as:
$$p_\alpha(X) = \frac{1}{\sqrt{|\alpha|}} \sum_n \prod_d x_{dn}^{\alpha_d}. \qquad (19)$$

So the plan is to find high-dimensional analogues of Lemma 4.4 and Lemma 4.5, now using multisymmetric powersum polynomials, mimic the proof of the D = 1 case, and then additionally show the hard function g is efficiently computable in the pairwise symmetric architecture. Note that because the algebraic basis of multisymmetric powersum polynomials is of size $L^* = \binom{N+D}{N} - 1$ (we subtract one in order to discount the constant polynomial), we can expect an exponential separation when we apply a similar rank argument.

5.2 Sketch of Main Result (lower bound)

Because we are in high dimensions, we cannot simply apply the restricted Hall inner product introduced in Theorem 4.3. To the best of our knowledge, there is no standard generalization of the Hall inner product to multisymmetric polynomials that preserves the orthogonality property. For the main technical ingredient in the high-dimensional case we introduce a novel generalization, which builds on two inner products. First, we introduce a new input distribution ν over set inputs $X \in \mathbb{C}^{D \times N}$, and induce an $L^2$ inner product:
$$\langle f, g \rangle_A = \mathbb{E}_{X \sim \nu}\left[ f(X) \overline{g(X)} \right]. \qquad (20)$$
We use this inner product to measure the approximation error of $\mathrm{Sym}_L$. That is, we seek a lower bound on $\min_{f \in \mathrm{Sym}_L} \|f - g\|_A$, for a suitable choice of hard function g. We can now apply an analogue of Lemma 4.4 to project f to a simplified form. But we cannot immediately apply an analogue of Lemma 4.5, as it relied on Parseval's theorem, and the low-degree multisymmetric powersum polynomials are not orthogonal in this inner product. Put another way, if we represent $\langle \cdot, \cdot \rangle_A$ as a matrix in the basis of low-degree multisymmetric powersums, it will be positive-definite but include some off-diagonal terms.

The idea is to now introduce a new inner product with a different input distribution $\nu_0$,
$$\langle f, g \rangle_{A_0} = \mathbb{E}_{X \sim \nu_0}\left[ f(X) \overline{g(X)} \right], \qquad (21)$$
and define the bilinear form
$$\langle f, g \rangle_* = \langle f, g \rangle_A - 2 \langle f, g \rangle_{A_0}. \qquad (22)$$
Typically positive-definiteness is lost when subtracting two inner products, but we prove that $\langle \cdot, \cdot \rangle_*$ is an inner product when restricted to a particular subspace of symmetric polynomials (see Theorem D.3). Furthermore, the careful choice of ν and $\nu_0$ cancels the off-diagonal correlation of different multisymmetric powersums, so they are orthogonal under this new inner product $\langle \cdot, \cdot \rangle_*$. By the norm domination $\| \cdot \|_A \ge \| \cdot \|_*$, we are able to pass from the former $L^2$ norm to the latter norm that obeys orthogonality, and apply an analogue of the Rank Lemma 4.5. Thus we derive a lower bound using any hard function g whose corresponding matrix G (built from orthogonal coefficients) is diagonal and high-rank. And because the total number of polynomials is $L^*$, the rank argument now yields an exponential separation.

Based on this proof, we have much freedom in our choice of g. By choosing its coefficients in the basis of multisymmetric powersum polynomials, it is easy to enforce the conditions that G is diagonal and high-rank for a variety of possible functions. However, ensuring that g is not pathological (i.e. that it is bounded and Lipschitz), and can be efficiently approximated in $\mathrm{Sym}^2_L$, requires a more careful choice.

5.3 Sketch of Main Result (upper bound)

It remains to approximate the hard function g with a network from $\mathrm{Sym}^2_L$. First we must make a particular choice of g. Based on the lower bound proof, the desideratum for g is that it is supported exclusively on terms of the form $p_\alpha p_\alpha$ over many values of α, as this induces a diagonal and high-rank matrix G in an analogue of Lemma 4.5. Furthermore, by simple algebra one can confirm that
$$p_\alpha(X) p_\alpha(X) = \frac{1}{|\alpha|} \sum_{n, n'} \prod_{d=1}^{D} (x_{dn} x_{dn'})^{\alpha_d},$$
so a g supported on these polynomials can clearly be written in the form of a network in $\mathrm{Sym}^2_L$. This structure of g guarantees difficult approximation, and is akin to the radial structure of the hard functions introduced in works on depth separation [7].

We must however be careful in our choice of g: for the matrix G to be high-rank, g must be supported on exponentially many powersum polynomials. But this could make $\|g\|_\infty$ exponentially large, and therefore challenging to approximate efficiently with a network from $\mathrm{Sym}^2_L$. We handle this difficulty by defining g in a different way. We introduce a finite Blaschke product $\mu(\xi) = \frac{\xi - 1/4}{\xi/4 - 1}$, a function that analytically maps the unit complex circle to itself. Then the choice
$$g(X) = \sum_{n, n'=1}^{N} \prod_{d=1}^{D} \mu(x_{dn} x_{dn'}) \qquad (23)$$
ensures that $\|g\|_\infty$, $\|g\|_A$, and $\mathrm{Lip}(g)$ are all polynomial in N, D, and $1/\epsilon$ for approximation error ε (see Lemma E.3). Furthermore, again by simple algebra it is clear that g is only supported on terms of the form $p_\alpha p_\alpha$. So it remains to show that the induced diagonal matrix G is effectively high rank, which follows from expanding the Blaschke products.

Satisfied that this choice of g will meet the desiderata for the lower bound, and has no pathological behavior, it remains to construct $f \in \mathrm{Sym}^2_L$ for L = 1 that approximates g. That is, we choose $\psi_1$ and ρ so that $g(X) \approx \rho\left(\sum_{n, n'=1}^{N} \psi_1(x_n, x_{n'})\right)$. Clearly we may take ρ to be the identity, and $\psi_1(x_n, x_{n'})$ to approximate $\prod_{d=1}^{D} \mu(x_{dn} x_{dn'})$, which is straightforwardly calculated in depth $O(\log D)$ by performing successive multiplications in a binary-tree-like structure (see Theorem F.1).
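The reason the Blaschke-based definition of g in eq. (23) avoids the blow-up is that μ maps the unit circle to itself, so every one of the N² summands is a product of D unimodular values. A quick numerical check (our own sketch):

import numpy as np

mu = lambda xi: (xi - 0.25) / (xi / 4.0 - 1.0)

# mu maps the unit circle to itself: |mu(xi)| = 1 whenever |xi| = 1.
xi = np.exp(2j * np.pi * np.linspace(0.0, 1.0, 10_000, endpoint=False))
print(np.max(np.abs(np.abs(mu(xi)) - 1.0)))   # ~1e-15

# Hence |g(X)| <= N^2 for any X on the torus, even though g is supported on
# exponentially many multisymmetric powersum terms.
rng = np.random.default_rng(2)
D, N = 5, 8
X = np.exp(2j * np.pi * rng.random((D, N)))
g = sum(np.prod(mu(X[:, n] * X[:, m])) for n in range(N) for m in range(N))
print(abs(g), N ** 2)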
Ultimately, we use a slight variant of this function for the formal proof. Because the orthogonality of our newly introduced inner product $\langle \cdot, \cdot \rangle_*$ only holds for low-degree polynomials, we must truncate the high-degree terms of g; we confirm in Appendix F that this truncation nevertheless preserves the properties we care about.

6 Discussion

In this work, we have demonstrated how symmetric width captures more of the expressive power of symmetric networks than depth when restricted to analytic activations, by evincing an exponential separation between two of the most common architectures that enforce permutation invariance. The most unusual property of this result is the complete independence of depth, owing to the unique orthogonality properties of the restricted Hall inner product when paired with the assumption of analyticity. This stands in contrast to the case of vanilla neural networks, for which separations beyond small depth would resolve open questions in circuit complexity suspected to be quite hard [25]. Furthermore, the greater dependence on width than depth is a property unique to symmetric networks, whereas the opposite is true for vanilla networks [26].

A natural extension would be to consider the simple equivariant layers introduced in Zaheer et al. [32], which we suspect will not substantially improve the approximation power of $\mathrm{Sym}_L$. Furthermore, allowing for multiple such equivariant layers, this network becomes exactly akin to a Graph Convolutional Network [10] on a complete graph, whereas $\mathrm{Sym}^2_L$ corresponds to a message passing network [9], as it is capable of interpreting edge features.

6.1 Limitations

The major limitation of this result is the restriction to analytic activations. Although analytic symmetric functions nevertheless appear crucially in the study of exactly solvable quantum systems [2, 11], this assumption may be overly strict for general problems of learning symmetric functions. We nevertheless conjecture that these bounds will still hold even allowing for non-analytic activations, and consider this an exciting question for future work. Additionally, whether the hard function g can be efficiently learned with gradient descent remains unclear, and future work could address learnability.

Acknowledgements: This work has been partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF-1845360, and NSF CCF-1814524.
1. What is the focus and contribution of the paper regarding symmetric neural networks? 2. What are the strengths of the proposed approach, particularly in terms of expressivity capabilities and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its limitations and comparisons with other works? 4. Do you have any questions regarding the paper's methodology or results? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The paper studies the expressivity of neural networks that belong to the class of symmetric neural networks. The two most prominent examples used as motivation for the current work are the so-called DeepSets and Relational Networks. Even though these are universal approximators for representing symmetric functions, what are the depth/width tradeoffs and approximation guarantees? This question reflects the analogous questions for standard feedforward neural networks. The main contribution of the paper is to explicitly construct symmetric functions which provably require exponentially many neurons in the DeepSets model, yet are efficiently approximated with self-attention. The crucial parameter controlling the expressivity of the network here is the so-called "symmetric width", and this leads to a conceptually different set of results in comparison to standard depth vs. width tradeoffs. The difference between the two architectures is presented by eq. (2) vs. eq. (4), where the latter allows for pairwise interactions among set elements.

Under plausible assumptions, the main result of the paper, formally Theorem 2.4, provides a family of analytic symmetric functions g which leads to two important properties in order to prove separation results. The first part says that "singleton symmetric networks" (i.e., those that don't allow for pairwise interactions) are insufficient to approximate g unless they are exponentially large in terms of their symmetric width, whereas the second part says that a simple "pairwise symmetric network" will incur negligible loss when trying to approximate g. The notions of error used are: for the lower bound in the first part, the authors use the L2 error under a suitable data distribution, and for the upper bound in the second part they use the infinity-norm error.

Strengths And Weaknesses

Strengths:
- Solid contribution to the theory of expressivity in neural nets that offers a new perspective and new techniques for a different set of architectures.
- Conceptually interesting that the result bears differences with standard separation results for feedforward neural nets.

Weaknesses:
- Limited literature comparisons, e.g., with Chulhee Yun et al. ("Are Transformers universal approximators of sequence-to-sequence functions?").
- L2 error bound instead of L1 error bound.
- Please see the questions below.

Overall, the reviewer thinks that the paper offers something new to the much-needed theory of neural nets for symmetric function representation, and that the paper does so in a technically solid manner.

Questions

- Complex values vs. real values? One question that comes up has to do with the fact that the input is complex-valued. How does this affect the statement of the result for real-valued neural nets? I.e., does the construction break down if we only cared about real-valued networks?
- Omissions in literature / missing comparisons: In Chulhee Yun et al. ("Are Transformers universal approximators of sequence-to-sequence functions?") the authors study sequence-to-sequence models and ask about their approximation/representation capabilities. Why did the authors not compare against this paper, given that Set Transformers are special instantiations of Transformers? There are also works in expressivity that manage to show separations based on topological properties of the function to be represented. These hold for feedforward neural nets and use zonotope theory (see "Understanding Deep Neural Networks with Rectified Linear Units" by Arora et al.)
or fixed-point arguments in dynamical systems (see "Better Depth-Width Trade-offs for Neural Networks through the Lens of Dynamical Systems" by Chatziafratis et al.). Is there hope to use the approaches taken there to show separations for symmetric nets? What is the crucial property of symmetric nets being exploited in the present paper? Is the main observation of the paper the fact that pairwise interaction is a "complexity" measure of sorts that cannot be simulated by the "singleton symmetric net"?
- Can the authors elaborate more on why the L2 error lower bound was used in eq. (6) rather than the stronger lower bound in L1? From Telgarsky's cited work (Benefits of depth in neural networks) it seems that L1 is the stronger guarantee. What is the main technical bottleneck for obtaining L1?

Thank you!

Limitations

N/A
1. What is the focus and contribution of the paper regarding symmetric functions and neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and clarity? 3. What are the weaknesses of the paper, especially regarding its practical context and motivation? 4. Do you have any concerns about the lower bound on the symmetric width for Deep Sets? 5. Are there any limitations regarding the ability of Relational Networks to model symmetric functions?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper provides a novel result on the relative expressiveness of two popular architectures for symmetric functions, Deep Sets (pooling over individual inputs) and Relational Networks (pooling over pairs of inputs).
Strengths And Weaknesses
This is a very well-written paper. The narrative is easy to follow and the key ideas are explained clearly. The key result is novel and interesting, and motivation is provided in the text. This paper will certainly be of significant interest to researchers working with symmetric neural networks. Most of the paper is devoted to setting up mathematical context or describing the mathematics of proving the main result. The mathematical content here is explained and structured very clearly, but perhaps a small amount of additional space could be given over to motivation and practical context for the result, which, although present, is currently a little lacking.
Questions
The main result in this paper shows that there exists a "hard" function g for which Deep Sets requires an exponentially greater symmetric width than Relational Networks. Is your lower bound on the symmetric width for Deep Sets the worst possible, i.e. does the symmetric width required by your function g suffice for Deep Sets to model any symmetric function? Is there a known bound on the required symmetric width for Relational Networks to model any symmetric function?
Limitations
I see no potential for negative societal impact. The authors briefly note a major limitation of their result. I think that this brief note is adequate, since it correctly identifies the most important possible strengthening of the result and explicitly leaves it open for future work.
NIPS
Title
Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms
Abstract
The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real applications, including recommender systems, online advertising and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. As an example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter which controls the trade-off between exploration and exploitation. A proper choice of the hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in the contextual bandit environment, since there is no pre-collected dataset and the decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto tuning the exploration parameter and further generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in the contextual bandit environment. We derive regret bounds for our proposed Syndicated Bandits framework and show that it avoids regret that depends exponentially on the number of hyper-parameters to be tuned. Moreover, it achieves optimal regret bounds under certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, UCB-GLM, etc. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
1 Introduction
The stochastic contextual bandit problem models the well-known exploration-exploitation dilemma in a repeated game between a player and an environment. At each round, the player sequentially interacts with the environment by pulling an arm from a pool of K arms, where every arm is associated with a d-dimensional contextual feature vector. Only the stochastic reward corresponding to the pulled arm is revealed to the player. The goal of the player is to maximize the cumulative reward or, equivalently, to minimize the cumulative regret. Due to the partial feedback setting, the player has to balance exploitation (pulling the arm that has the best estimated reward so far) against exploration (checking whether there are uncertain arms that may be better than the current estimated best arm). With substantial applications in recommender systems [17], online advertising [20], clinical trials [25], etc., bandit algorithms have been extensively studied during the past few decades. In general, there are two exploration techniques: Upper Confidence Bound (UCB) [6, 17, 18] and Thompson Sampling (TS) [4, 5] algorithms. The UCB algorithm addresses the dilemma optimistically by pulling the arm that has the biggest upper confidence bound. The TS algorithm usually assumes that the relationship between the contextual features and rewards follows a prior model and uses new observations at each round to estimate the posterior model. The player then pulls the arm that has the best estimated reward based on the posterior model. In general, contextual bandit problems have regret lower bounded by $\Omega(\sqrt{T})$ [13, 17], where T is the total number of rounds.
Both UCB and TS algorithms have been shown to achieve optimal regret bounds in (generalized) linear bandit problems [17, 18], kernelized bandit problems [12] and even in contextual bandit problems with more complicated models such as neural networks [28]. Despite the popularity of the contextual bandit problem, there are some practical issues that prevent it from being widely used in practice. In both UCB and TS, there are hyper-parameters that are unknown to the player. One of the most important hyper-parameters is the exploration parameter, which controls the trade-off between exploration and exploitation. A good choice of the exploration parameter is essential for the algorithm to perform well and for the theory to hold. Another commonly seen hyper-parameter is the regularization parameter λ in the ridge regression or generalized linear model that is used to model the relationship between features and rewards in (generalized) linear bandits. In contextual bandit problems with complex models such as neural networks, the recently proposed NeuralUCB [28] algorithm has far more than just two hyper-parameters: NeuralUCB also needs to select the network width, the network depth, the step size for the gradient descent used to train the network, the number of gradient descent steps, etc. Due to the nature of the bandit environment, where decisions have to be made in real time, it is inherently difficult to tune the hyper-parameters with traditional offline tuning methods such as cross validation: once a parameter has been used on part of the data and a decision has been made based on it, the regret incurred by that decision is irreversible in the contextual bandit environment. In many prominent bandit works [17, 14, 28, 10], the experiments are conducted by running a grid search over the possible choices of parameters, and only the best result is reported. Although the performance of the best grid-search result is of academic interest, grid search is not possible in practice. In some other works [15], the exploration parameter is set to a theoretically derived value that is sufficient for the analysis but depends on unknown quantities; this value may be too conservative and may not achieve good performance in practice, as can be seen from the experiments in Table 1. In this work, we first propose a two-layer bandit structure that can automatically tune the exploration parameter dynamically from the observed data. The two-layer bandit structure has similarities to the bandit-over-bandit (BOB) algorithm [11] proposed for non-stationary stochastic bandit problems, which uses the BOB idea to adapt its sliding-window sizes by restarting the algorithm in epochs. Motivated by the two-layer bandit structure we propose in Section 4, we generalize it to the "Syndicated Bandits" framework, in which multiple hyper-parameters of the contextual bandit algorithm can be tuned. We provide theoretical guarantees for our framework and show that our proposed auto-tuning method in general has regret upper bound $\tilde{O}(T^{2/3}) + \tilde{O}(\sum_{l=1}^{L} \sqrt{n_l T})$. Here L is the total number of hyper-parameters to be tuned and $n_l$ is the number of candidates in the tuning set of the l-th hyper-parameter. When the unknown theoretical exploration parameter is no bigger than any element in the tuning set, our proposed framework has the optimal regret upper bound $\tilde{O}(\sqrt{T}) + \tilde{O}(\sum_{l=1}^{L} \sqrt{n_l T})$ for UCB-based algorithms.
Our framework is general enough to handle tuning tasks in many contextual bandit algorithms, as long as the arm to be pulled at round t follows a fixed distribution given the hyper-parameters used at this round and the past information. This includes many popular contextual bandit algorithms such as Linear UCB (LinUCB) [17, 1], Linear TS (LinTS) [5, 10], UCB-GLM [18], etc. Our proposed Syndicated Bandits framework is the first work that considers tuning multiple parameters dynamically from observations in contextual bandit problems with theoretical guarantees. We provide a regret bound that avoids an exponential dependency on the total number of hyper-parameters to be tuned. This is one of the main contributions of our work. In Section 6, we show by experiments that our proposed framework improves over existing works, as well as over bandit algorithms that use the unknown theoretically derived exploration parameter.
2 Related work
There is a rich line of work on multi-armed bandit (MAB) and stochastic contextual bandit algorithms, including (generalized) linear bandits, kernelized bandits, neural bandits, etc. Most of them follow the UCB and TS exploration techniques. We refer the reader to [17, 18, 4, 5, 10, 12, 28] for the seminal works on these bandit problems. There are many previous works that utilize algorithms from the stochastic MAB setting [23] to solve the hyper-parameter optimization problem [21, 19]. There are also some online hyper-parameter tuning works such as [24]; however, these mainly focus on reducing the training cost of tuning neural network parameters online and do not consider minimizing the cumulative regret in contextual bandit problems. In the following, we therefore only discuss related work on tuning tasks in stochastic contextual bandits. [22] proposed a meta-learning method for learning exploration parameters in contextual bandit problems. It learns a good exploration strategy on synthetic datasets and applies it to real contextual bandit problems via an imitation study. The meta-learning algorithm is compared with seven baseline contextual bandit algorithms and achieves good empirical results. We note that this algorithm cannot learn the exploration parameters adaptively from observations in the contextual bandit environment. In [9], the authors first proposed the OPLINUCB and DOPLINUCB algorithms to learn exploration parameters dynamically. OPLINUCB treats the possible choices of hyper-parameters as arms and uses a standard MAB TS algorithm to choose parameters. It then uses the chosen parameter in the contextual bandit algorithm. However, this method does not have a theoretical guarantee in general, since MAB TS only works when the rewards of the candidate hyper-parameters in the tuning set stay stationary over time. For hyper-parameter selection in contextual bandit problems, the best exploration parameter does not stay the same over time: in later rounds, when the estimates are accurate, less exploration is better, whereas in the beginning, more exploration is preferred due to the uncertainty. This non-stationary nature of hyper-parameter tuning makes the performance of OPLINUCB unstable in practice. DOPLINUCB is a tuning method similar to OPLINUCB, except that it uses the CTree algorithm to select hyper-parameters at each round.
It is shown in [9] that DOPLINUCB does not outperform OPLINUCB in stationary contextual bandit environments, where the reward-feature model does not change over time. Another close line of literature is on model selection in bandit algorithms. [16] tackles the feature selection problem in bandit algorithms and achieves $O(T^{2/3} d_*^{1/3})$ regret, where $d_*$ is the total number of optimal features. [3] uses the corralling idea to create a master algorithm that chooses the best bandit model from a set of M base models. The hyper-parameter tuning problem can be formulated as a model selection problem in [3], where bandit algorithms with different hyper-parameters are treated as the base models. The theoretical regret bound of the corralling idea [3] is $O(\sqrt{MT} + MR_{\max})$, where M is the total number of base models and $R_{\max}$ is the maximum regret of the M base models if they were each run alone. Since tuning L hyper-parameters jointly requires $M = \prod_{l=1}^{L} n_l$ base models, the regret bound is exponentially dependent on the total number of hyper-parameters to be tuned. In addition, if there is one hyper-parameter in the tuning set that gives linear regret of the algorithm, then $R_{\max}$ is linear in T, which makes the corralling idea have linear regret in the worst case. Our algorithm is also much more efficient than the corralling idea when M is big. The corralling idea requires updating all M base models/algorithms at each round, whereas our algorithm only needs to update the selected model/bandit algorithm with the selected hyper-parameter at each round. When the time complexity of updating the model/algorithm is large, the corralling idea is expensive. For example, to tune configurations for UCB-GLM, the corralling idea needs $O(MT^2d)$ time, while the time complexity of our algorithm is only $O(MT + T^2d)$. We emphasize that none of the previous works can tune multiple hyper-parameters dynamically from observations. OPLINUCB [9] and the corralling idea [3] can treat all the hyper-parameters as a single parameter whose tuning set contains all possible combinations of hyper-parameters, but this leads to an exponential number of configurations, which is inefficient both computationally and in the theoretical regret bound. Our proposed Syndicated framework avoids the exponential regret bound.
Notations: For a vector $x \in \mathbb{R}^d$, we use $\|x\|$ to denote its $l_2$ norm and $\|x\|_A := \sqrt{x^T A x}$ for a positive-definite matrix $A \in \mathbb{R}^{d \times d}$. Finally, we denote $[n] := \{1, 2, \ldots, n\}$.
3 Preliminaries
We study hyper-parameter selection tasks in a stochastic contextual bandit problem with K arms, where K may be infinite. Assume there are in total T rounds. At each round $t \in [T]$, the player is given K arms, represented by a set of feature vectors $A_t = \{x_{t,a} \mid a \in [K]\} \subset \mathbb{R}^d$, where $x_{t,a}$ is a d-dimensional feature vector that contains the information of arm a at round t; $A_t$ is drawn i.i.d. from an unknown distribution with $\|x_{t,a}\| \leq 1$ for all $t \in [T]$ and $a \in [K]$. The player makes a decision by pulling an arm $a_t \in [K]$ based on $A_t$ and past observations. We make a common regularity assumption as in [14, 18], i.e., there exists a constant $\sigma_0 > 0$ such that $\lambda_{\min}\big(\mathbb{E}\big[\frac{1}{K}\sum_{a=1}^{K} x_{t,a} x_{t,a}^{\top}\big]\big) > \sigma_0$. The player can only observe the rewards of the pulled arms. Denote by $X_t := x_{t,a_t}$ the feature vector of the pulled arm at round t and by $Y_t$ the corresponding reward. We assume the expected rewards and features follow a model $\mathbb{E}[Y_t \mid X_t] = \mu(X_t^T\theta^*)$, where µ(·) is a known model function and $\theta^*$ is the true but unknown model parameter. When µ(x) = x, this becomes the well-studied linear bandit problem.
When µ(·) is a generalized linear model or a neural network, this becomes the generalized linear bandit (GLB) problem or the neural bandit problem, respectively. Without loss of generality, we assume that there exists a positive constant S such that $\|\theta^*\| \leq S$. We also assume the mean rewards $\mu(x_{t,a}^T\theta^*) \in [0, 1]$ and observed rewards $Y_t \in [0, 1]$. This is a noncritical assumption, which can easily be relaxed to any bounded interval. If $\mathcal{F}_t = \sigma(\{a_s, A_s, Y_s\}_{s=1}^{t} \cup A_{t+1})$ is the information up to round t, we assume the observed rewards are sub-Gaussian with parameter $\sigma^2$, i.e., $Y_t = \mu(X_t^T\theta^*) + \epsilon_t$, where the $\epsilon_t$ are independent random noises that satisfy $\mathbb{E}[e^{b\epsilon_t} \mid \mathcal{F}_{t-1}] \leq \exp(b^2\sigma^2/2)$ for all t and $b \in \mathbb{R}$. Denote by $a_t^* = \arg\max_{a \in [K]} \mu(x_{t,a}^T\theta^*)$ the optimal arm at round t and by $x_{t,*}$ its corresponding feature. The goal is to minimize the cumulative regret over T rounds, defined as
$$R(T) = \sum_{t=1}^{T}\left[\mu(x_{t,*}^T\theta^*) - \mu(X_t^T\theta^*)\right]. \qquad (1)$$
For linear bandits, where µ(x) = x, classic bandit algorithms such as LinUCB [1, 17] and LinTS [2] compute an estimate $\hat\theta_t$ of the model parameter using ridge regression with regularization parameter λ > 0, i.e., $\hat\theta_t = V_t^{-1}\sum_{s=1}^{t-1} X_s Y_s$, where $V_t = \lambda I_d + \sum_{s=1}^{t-1} X_s X_s^T$. As shown by [1], with probability at least 1 − δ, the true model parameter $\theta^*$ is contained in the confidence set
$$C_t = \left\{\theta \in \mathbb{R}^d : \|\theta - \hat\theta_t\|_{V_t} \leq \alpha(t)\right\}, \qquad (2)$$
where
$$\alpha(t) = \sigma\sqrt{d\log\left(\frac{1 + t/\lambda}{\delta}\right)} + S\sqrt{\lambda}. \qquad (3)$$
To balance the trade-off between exploration and exploitation, there are in general two techniques. In linear bandits, for example, LinUCB explores optimistically by pulling the arm with the maximum upper confidence bound, while LinTS adds randomization by drawing a sample model from the posterior distribution and pulling an arm based on it:
$$a_t = \arg\max_a x_{t,a}^T\hat\theta_t + \alpha(t)\|x_{t,a}\|_{V_t^{-1}}, \qquad \text{(LinUCB)}$$
$$\theta_t^{TS} \sim N(\hat\theta_t, \alpha(t)^2 V_t^{-1}) \ \text{ and } \ a_t = \arg\max_a x_{t,a}^T\theta_t^{TS}. \qquad \text{(LinTS)}$$
In the following, we call α(t) the exploration parameter. As suggested by the theory in [1, 17], a conservative choice of the exploration parameter is to follow Equation 3. However, in Equation 3, the upper bound S on the $l_2$ norm of the model parameter and the sub-Gaussian parameter σ are unknown to the player, which makes it difficult to track the theoretical choice of the exploration parameter. In Table 1, we show the cumulative regret of LinUCB [1, 17] and LinTS [5] in a simulation study with d = 5, T = 10000 and K = 100. Rewards are simulated from $N(x_{t,a}^T\theta^*, 0.5)$. The model parameter $\theta^*$ and feature vectors $x_{t,a}$ are drawn from $\mathrm{Uniform}(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$. Two scenarios are considered in this table. In the first scenario, the feature of each arm stays the same over the T rounds, while in the second scenario the features are re-simulated from $\mathrm{Uniform}(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$ at each round. We run a grid search over the exploration parameter in {0, 0.5, 1, . . . , 10} and report the best grid-search result, as well as the results obtained using the theoretical exploration parameter given by Equation 3 (last column in Table 1). As can be seen in Table 1, the best exploration parameter differs between scenarios. Therefore, which exploration parameter to use is an instance-dependent problem, and the best exploration parameter should always be chosen dynamically based on the observations. Meanwhile, Table 1 shows that theoretical exploration parameters do not always give the best performance.
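As a concrete illustration of the two selection rules and the ridge-regression update above, here is a minimal sketch in Python (ours, not the authors' code); the helper names and array shapes are our own choices.

```python
import numpy as np

def ridge_stats(X_hist, Y_hist, lam):
    """theta_hat = V^{-1} sum_s X_s Y_s with V = lam * I_d + sum_s X_s X_s^T."""
    d = X_hist.shape[1]
    V_inv = np.linalg.inv(lam * np.eye(d) + X_hist.T @ X_hist)
    return V_inv @ (X_hist.T @ Y_hist), V_inv

def linucb_arm(A_t, theta_hat, V_inv, alpha):
    """LinUCB rule: argmax_a  x_{t,a}^T theta_hat + alpha * ||x_{t,a}||_{V_t^{-1}}."""
    widths = np.sqrt(np.einsum('kd,de,ke->k', A_t, V_inv, A_t))  # ||x||_{V^{-1}}
    return int(np.argmax(A_t @ theta_hat + alpha * widths))

def lints_arm(A_t, theta_hat, V_inv, alpha, rng):
    """LinTS rule: sample theta ~ N(theta_hat, alpha^2 V_t^{-1}), then argmax_a x^T theta."""
    theta = rng.multivariate_normal(theta_hat, alpha**2 * V_inv)
    return int(np.argmax(A_t @ theta))

# One round with K = 100 arms in d = 5 dimensions and a few past observations.
rng = np.random.default_rng(0)
d = 5
X_hist = rng.uniform(-1/np.sqrt(d), 1/np.sqrt(d), size=(20, d))
Y_hist = rng.uniform(0.0, 1.0, size=20)
theta_hat, V_inv = ridge_stats(X_hist, Y_hist, lam=1.0)
A_t = rng.uniform(-1/np.sqrt(d), 1/np.sqrt(d), size=(100, d))
print(linucb_arm(A_t, theta_hat, V_inv, alpha=1.0),
      lints_arm(A_t, theta_hat, V_inv, alpha=1.0, rng=rng))
```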
On the other hand, in many other works where the model of the contextual bandit problem is more complex, such as the generalized linear bandit [14] or the neural bandit [28], there may be many more hyper-parameters than just α(t).
4 A two-layer bandit structure for tuning exploration parameters
In the previous section, we discussed that the best hyper-parameters should be instance-dependent. In this section, we propose a two-layer bandit structure to automatically learn the best hyper-parameter from data at each round. We take learning the best exploration parameter as an example; however, we emphasize that this structure can also be applied to learn any other single hyper-parameter. We randomly select arms for the first $T_1$ rounds to warm up the algorithm. For all later rounds, the top layer of the two-layer bandit structure follows an adversarial MAB policy, namely the EXP3 algorithm [7]. Let J be the tuning set of all possible exploration parameters. At each round $t > T_1$, the top layer selects a candidate exploration parameter $\alpha_{i_t} \in J$, where $\alpha_i$ is the i-th element of J and $i_t$ is the index selected at round t. The bottom layer runs the contextual bandit algorithm with the selected exploration parameter $\alpha_{i_t}$. Details are listed in Algorithm 1.
4.1 Regret analysis
Given all the past information $\mathcal{F}_{t-1}$, denote by $a_t(\alpha_j|\mathcal{F}_{t-1})$ the arm pulled at round t when the exploration parameter is $\alpha_j$, and by $X_t(\alpha_j|\mathcal{F}_{t-1}) = x_{t,a_t(\alpha_j|\mathcal{F}_{t-1})}$ the corresponding feature vector under $\mathcal{F}_{t-1}$. Note that in our algorithm, $X_t := X_t(\alpha_{i_t}|\mathcal{F}_{t-1})$ when $t > T_1$. To analyze the cumulative regret, we first decompose the regret defined in Equation 1 into three parts:
$$\mathbb{E}[R(T)] = \mathbb{E}\Big[\sum_{t=1}^{T}\big(\mu(x_{t,*}^T\theta^*) - \mu(X_t^T\theta^*)\big)\Big] = \underbrace{\mathbb{E}\Big[\sum_{t=T_1+1}^{T}\big(\mu(x_{t,*}^T\theta^*) - \mu(X_t(\alpha^*|\mathcal{F}_{t-1})^T\theta^*)\big)\Big]}_{\text{Quantity (A)}}$$
$$+ \underbrace{\mathbb{E}\Big[\sum_{t=T_1+1}^{T}\big(\mu(X_t(\alpha^*|\mathcal{F}_{t-1})^T\theta^*) - \mu(X_t(\alpha_{i_t}|\mathcal{F}_{t-1})^T\theta^*)\big)\Big]}_{\text{Quantity (B)}} + \underbrace{\mathbb{E}\Big[\sum_{t=1}^{T_1}\big(\mu(x_{t,*}^T\theta^*) - \mu(X_t^T\theta^*)\big)\Big]}_{\text{Quantity (C)}},$$
where µ(·) is the reward-feature model function and $\alpha^* \in J$ is an arbitrary candidate exploration parameter in J. Quantity (A) is the regret of the contextual bandit algorithm that runs with the same hyper-parameter $\alpha^*$ at every round under the past history $\mathcal{F}_{t-1}$ generated from our tuning strategy. Quantity (B) is the extra regret paid to tune the hyper-parameter. Quantity (C) is the regret paid for random exploration in the warm-up phase and is controlled at the scale of $O(T_1)$. We show in Lemma 1 and Theorem 1 below that, under mild conditions, our auto-tuning method in Algorithm 1 does not cost too much for selecting parameters in most scenarios.
Algorithm 1 A Two-layer Auto Tuning Algorithm
Input: time horizon T, warm-up length $T_1$, candidate hyper-parameter set $J = \{\alpha_i\}_{i=1}^{n}$.
1: Randomly choose $a_t \in [K]$ and record $X_t$, $Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w_j(T_1 + 1) = 1$ for $j = 1, \ldots, n$.
3: Initialize the exploration parameter for EXP3 as $\beta = \min\big\{1, \sqrt{\frac{n \log n}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5: Update the probability distribution for pulling candidates in J:
$$p_j(t) = \frac{\beta}{n} + (1 - \beta)\frac{w_j(t)}{\sum_{i=1}^{n} w_i(t)}$$
6: $i_t \leftarrow j \in [n]$ with probability $p_j(t)$.
7: Run the contextual bandit algorithm with hyper-parameter $\alpha(t) = \alpha_{i_t}$ to pull an arm. For example, pull arms according to
$$a_t = \arg\max_{a=1,\ldots,K} x_{t,a}^T\hat\theta_t + \alpha_{i_t}\|x_{t,a}\|_{V_t^{-1}} \qquad \text{(LinUCB)}$$
$$\theta_t^{TS} \sim N(\hat\theta_t, \alpha_{i_t}^2 V_t^{-1}) \ \text{ and } \ a_t = \arg\max_a x_{t,a}^T\theta_t^{TS}. \qquad \text{(LinTS)}$$
8: Observe the reward $Y_t$ and update the components of the contextual bandit algorithm.
9: Update the EXP3 components: $\hat{y}_t(j) \leftarrow 0$ if $j \neq i_t$, $\hat{y}_t(j) \leftarrow Y_t/p_j(t)$ if $j = i_t$, and $w_j(t+1) = w_j(t) \times \exp\big(\frac{\beta}{n}\hat{y}_t(j)\big)$.
10: end for
Since the arms pulled by the contextual bandit layer also affect the update of the EXP3 layer in Algorithm 1, the standard EXP3 result is not directly applicable to bounding Quantity (B). We modify the proof techniques in [7] and present the proof details in the Appendix.
Lemma 1. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameter used by the contextual bandit algorithm at round t, the arm to be pulled follows a fixed distribution. For a random sequence of hyper-parameters $\{\alpha_{i_1}, \ldots, \alpha_{i_T}\}$ selected by the EXP3 layer in Algorithm 1, with arm $a_t(\alpha_{i_t})$ pulled by the contextual bandit layer at round t, we have
$$\max_{\alpha \in J} \mathbb{E}\Big[\sum_{t=1}^{T}\mu\big(X_t(\alpha|\mathcal{F}_{t-1})^T\theta^*\big)\Big] - \mathbb{E}\Big[\sum_{t=1}^{T}\mu\big(X_t(\alpha_{i_t}|\mathcal{F}_{t-1})^T\theta^*\big)\Big] \leq 2\sqrt{(e-1)nT\log n},$$
where $J = \{\alpha_1, \ldots, \alpha_n\}$ is the tuning set of the hyper-parameter and $|J| = n$.
To bound Quantity (A), note that we cannot directly use any existing regret bound from the literature, since the past information $\mathcal{F}_{t-1}$ here is based on the sequence of arms pulled by our auto-tuning algorithm rather than on arms generated by using $\alpha^*$ at each round, and this history affects the updates of the bandit algorithm. We overcome this challenge by noticing that the consistency of $\hat\theta_t$ plays a vital role in most proofs for (generalized) linear bandits, and this consistency still holds after a warm-up period or with a large exploration rate. Therefore, we can expect a tight bound on the cumulative regret obtained with a fixed exploration parameter even under a different sequence of observations $\mathcal{F}_{t-1}$, provided there is sufficient exploration. Another crux of the proof is that the regret is usually related to $\|x_t\|_{V_t^{-1}}$, which can similarly be bounded after sufficient exploration. After bounding Quantity (A) and combining with Lemma 1, we obtain the following theorem.
Theorem 1. Assume that, given the past information $\mathcal{F}_{t-1}$ generated from our proposed algorithm for arm selection and the hyper-parameter used by the contextual bandit algorithm, the arm to be pulled follows a fixed distribution. For UCB- and TS-based generalized linear bandit algorithms with exploration hyper-parameters (LinUCB, UCB-GLM, LinTS, etc.), the regret of Algorithm 1 satisfies:
(1) $\mathbb{E}[R(T)] = \tilde{O}(T^{2/3}) + O(\sqrt{n(T - T_1)\log n})$ given the warm-up length $T_1 = \tilde{O}(T^{2/3})$.
(2) For UCB-based bandits, if the theoretical exploration parameter α(T) is no larger than any element of J, then $\mathbb{E}[R(T)] = \tilde{O}(\sqrt{T}) + O(\sqrt{nT\log n})$ with $T_1 = 0$.
(3) If $A_t$ is a convex set and the smallest principal curvature in any neighborhood of the optimal vector $x_{t,*} \in A_t$ on $A_t$ can be lower bounded by some positive constant c, then $\mathbb{E}[R(T)] = \tilde{O}(T^{4/7}) + O(\sqrt{n(T - T_1)\log n})$ after a warm-up period of length $T_1 = O(T^{4/7})$.
Remark 1. We can expect a result similar to Theorem 1 (2) for TS-based bandit algorithms, and we offer an intuitive explanation in Appendix 4. Moreover, the conditions in Theorem 1 (3) are easily verified in many cases; for example, they hold when $A_t = \{x \in \mathbb{R}^d : \|x\| \leq a\}$ for any $a > 0$.
5 The Syndicated Bandits framework for selecting multiple hyper-parameters
There can be multiple hyper-parameters in a contextual bandit algorithm. For example, in linear bandit algorithms such as LinUCB [1, 17] and LinTS [5], the exploration parameter α and the regularization parameter λ of the ridge regression are both hyper-parameters to be tuned.
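Before turning to the multi-parameter case, here is a minimal runnable sketch (ours, not the authors' released code) of Algorithm 1 with $T_1 = 0$, using EXP3 on top and LinUCB below; the Syndicated extension described next simply replaces the single EXP3 learner with L independent ones, one per hyper-parameter. Function names and the simplified weight update are our own choices.

```python
import numpy as np

def two_layer_tuning(contexts, reward_fn, J, T, lam=1.0, seed=0):
    """Sketch of Algorithm 1 with T1 = 0: an EXP3 top layer picks the
    exploration parameter from J; a LinUCB bottom layer pulls the arm.
    contexts(t) -> (K, d) arm features; reward_fn(t, x) -> reward in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = len(J)
    w = np.ones(n)                                   # EXP3 weights (step 2)
    beta = min(1.0, np.sqrt(n * np.log(n) / ((np.e - 1) * T)))  # step 3
    d = contexts(0).shape[1]
    V = lam * np.eye(d)                              # ridge statistics
    b = np.zeros(d)
    for t in range(T):
        p = beta / n + (1 - beta) * w / w.sum()      # step 5
        i_t = rng.choice(n, p=p)                     # step 6
        A_t = contexts(t)
        V_inv = np.linalg.inv(V)
        theta_hat = V_inv @ b
        ucb = A_t @ theta_hat + J[i_t] * np.sqrt(
            np.einsum('kd,de,ke->k', A_t, V_inv, A_t))
        a_t = int(np.argmax(ucb))                    # step 7 (LinUCB)
        x = A_t[a_t]
        y = reward_fn(t, x)                          # step 8
        V += np.outer(x, x)
        b += y * x
        # steps 9-10; a production version would track log-weights for stability
        w[i_t] *= np.exp(beta / n * y / p[i_t])
    return theta_hat

# Toy run: d = 5, K = 100, rewards x^T theta* clipped to [0, 1].
d, K = 5, 100
theta_star = np.random.default_rng(1).uniform(-1/np.sqrt(d), 1/np.sqrt(d), d)
contexts = lambda t: np.random.default_rng(t).uniform(-1/np.sqrt(d), 1/np.sqrt(d), (K, d))
reward_fn = lambda t, x: float(np.clip(x @ theta_star + 0.5, 0.0, 1.0))
two_layer_tuning(contexts, reward_fn, J=[0.0, 0.01, 0.1, 1.0, 10.0], T=2000)
```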
In more recent contextual bandit works, there can be even more than two hyper-parameters. For example, the NeuralUCB algorithm [28], proposed for contextual bandit problems with a deep neural network model, has many tuning parameters, such as the network width, the network depth, the step size for gradient descent, the number of gradient descent steps, as well as the exploration parameter and the regularization parameter λ. Another example can be found in [14], where an efficient SGD-TS algorithm is proposed for generalized linear bandits; its number of tuning parameters is also more than two. A naive strategy for auto-tuning multiple hyper-parameters is to use Algorithm 1 and let the tuning set J contain all possible combinations of the hyper-parameters. Assume there are in total L hyper-parameters $\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(L)}$, and for each $l \in [L]$ the tuning set for $\alpha^{(l)}$ is $J_l = \{\alpha_1^{(l)}, \ldots, \alpha_{n_l}^{(l)}\}$, where $n_l$ is its size. Then there are in total $\prod_{l=1}^{L} n_l$ possible combinations. Based on Lemma 1, the extra regret paid to tune the hyper-parameters (Quantity (B)) is upper bounded by $\tilde{O}(\sqrt{\prod_{l=1}^{L} n_l T})$. Therefore, the naive approach makes the regret grow exponentially with the number of tuning parameters. To mitigate this issue, we propose the Syndicated Bandits framework, which can deal with multiple hyper-parameters while avoiding the exponential dependency on the number of tuning parameters in the regret bound. We create L + 1 bandit instances in this framework. In the bottom layer, the contextual bandit algorithm is used to decide which arm to pull. On top of the contextual bandit layer, there are L EXP3 bandits, denoted EXP3(l) for $l \in [L]$; each EXP3 instance is responsible for tuning one hyper-parameter only. At round t, if $i_t(l)$ is the index in $J_l$ selected by the EXP3(l) bandit and the selected hyper-parameter is denoted $\alpha^{(l)}_{i_t(l)}$ for $l \in [L]$, then the contextual bandit algorithm in the bottom layer uses these parameters to make a decision and receives a reward based on the pulled arm. The reward is fed to all L + 1 bandits to update their components. An illustration of the algorithm and more details are presented in Figure 1 and Algorithm 2 in the Appendix.
5.1 Regret analysis
At round t, given all the past information $\mathcal{F}_{t-1}$, denote by $a_t(\alpha_{j_1}^{(1)}, \ldots, \alpha_{j_L}^{(L)}|\mathcal{F}_{t-1})$ the arm pulled by the contextual bandit algorithm if the parameters are chosen as $\alpha^{(l)} = \alpha_{j_l}^{(l)}$ for all $l \in [L]$, and let $X_t(\alpha_{j_1}^{(1)}, \ldots, \alpha_{j_L}^{(L)}|\mathcal{F}_{t-1})$ be the corresponding feature vector. Recall that µ(·) is the reward-feature model function; then for an arbitrary combination of hyper-parameters $(\alpha_*^{(1)}, \ldots, \alpha_*^{(L)})$,
$$\mathbb{E}[R(T)] = \sum_{t=1}^{T_1}\mathbb{E}\big[\mu(x_{t,*}^T\theta^*) - \mu(X_t^T\theta^*)\big] + \sum_{t=T_1+1}^{T}\mathbb{E}\big[\mu(x_{t,*}^T\theta^*) - \mu(X_t(\alpha_*^{(1)}, \ldots, \alpha_*^{(L)}|\mathcal{F}_{t-1})^T\theta^*)\big]$$
$$+ \sum_{t=T_1+1}^{T}\mathbb{E}\big[\mu(X_t(\alpha_*^{(1)}, \ldots, \alpha_*^{(L)}|\mathcal{F}_{t-1})^T\theta^*) - \mu(X_t(\alpha_{i_t(1)}^{(1)}, \alpha_*^{(2)}, \ldots, \alpha_*^{(L)}|\mathcal{F}_{t-1})^T\theta^*)\big]$$
$$+ \sum_{t=T_1+1}^{T}\mathbb{E}\big[\mu(X_t(\alpha_{i_t(1)}^{(1)}, \alpha_*^{(2)}, \ldots, \alpha_*^{(L)}|\mathcal{F}_{t-1})^T\theta^*) - \mu(X_t(\alpha_{i_t(1)}^{(1)}, \alpha_{i_t(2)}^{(2)}, \alpha_*^{(3)}, \ldots|\mathcal{F}_{t-1})^T\theta^*)\big]$$
$$+ \cdots + \sum_{t=T_1+1}^{T}\mathbb{E}\big[\mu(X_t(\alpha_{i_t(1)}^{(1)}, \ldots, \alpha_{i_t(L-1)}^{(L-1)}, \alpha_*^{(L)}|\mathcal{F}_{t-1})^T\theta^*) - \mu(X_t(\alpha_{i_t(1)}^{(1)}, \ldots, \alpha_{i_t(L)}^{(L)}|\mathcal{F}_{t-1})^T\theta^*)\big].$$
The first quantity represents the regret from pure exploration. The second quantity is the regret of the contextual bandit algorithm that runs with the same hyper-parameters $\alpha_*^{(1)}, \ldots, \alpha_*^{(L)}$ at every round under the past history $\mathcal{F}_{t-1}$ generated from our tuning strategy. The next L quantities in the decomposition are the regret from tuning parameters in the EXP3 layers, which can be bounded using techniques similar to Lemma 1; however, the correlations between parameters make the analysis more involved. Formally, we provide the following theorem to guarantee the performance of the Syndicated Bandits framework. Proofs are deferred to the Appendix.
Theorem 2. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameters to be used by the contextual bandit algorithm at round t, the arm to be pulled by the contextual bandit algorithm follows a fixed distribution. Then the auto-tuning method in Algorithm 2 with warm-up length $T_1 = O(T^{2/3})$ has the following regret bound in general:
$$\mathbb{E}[R(T)] \leq \tilde{O}(T^{2/3}) + O\Big(\sum_{l=1}^{L}\sqrt{n_l(T - T_1)\log n_l}\Big).$$
Remark 2. Note that this result avoids an exponential dependency of the regret on the number of hyper-parameters to be tuned. When the hyper-parameters to be tuned are the exploration parameter α and the regularization parameter λ of the (generalized) linear model, we also obtain the same conclusions as in Theorem 1 (3). Please refer to Appendix A.3 for a formal statement and its proof.
Remark 3. Without any assumptions, Algorithm 2 has regret dependent on d as $O(d^3 + dT^{2/3})$ for both UCB and TS. In practice, usually $d \ll T$.
6 Experimental results
We show by experiments that our proposed methods outperform various contextual bandit algorithms using the theoretical exploration parameter, as well as existing tuning methods. We compare different hyper-parameter selection methods in three popular contextual bandit algorithms: LinUCB [1, 17], LinTS [5] and UCB-GLM [18] with a logistic model. In practice, we set the warm-up length to $T_1 = 0$ and tune both the exploration parameter and the regularization parameter. We compare the following hyper-parameter selection methods. Theoretical-Explore [1]: at round t, this method uses the theoretical exploration parameter α(t) defined in Equation 3. OP [9]: we make simple modifications of OPLINUCB to make it applicable to tuning exploration parameters for LinUCB, LinTS and UCB-GLM. Corral [3]: this method uses the corralling idea to tune the exploration parameter only. Corral-Combined [3]: this method treats bandits with different combinations of the exploration parameter and the regularization parameter λ as base models and uses the corralling idea to tune the configurations. TL (our work, Algorithm 1): our proposed Algorithm 1, where we use the two-layer bandit structure to tune the exploration parameter only. TL-Combined (our work, Algorithm 1): this method tunes both the exploration parameter α and the regularization parameter λ using Algorithm 1, with the tuning set containing all possible combinations of α and λ. Syndicated (our work, Algorithm 2): this method keeps two separate tuning sets for α and λ and uses the Syndicated Bandits framework of Algorithm 2. We set the tuning set for the exploration parameter α to {0, 0.01, 0.1, 1, 10} and the tuning set for the regularization parameter λ to {0.01, 0.1, 1} in TL-Combined, Corral-Combined and Syndicated.
Algorithm 2 The Syndicated Bandits Framework for Auto Tuning Multiple Hyper-parameters
Input: time horizon T, warm-up length $T_1$, candidate hyper-parameter sets $\{J_l\}_{l=1}^{L}$.
1: Randomly choose $a_t \in [K]$ and record $X_t$, $Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w_j^{(l)}(T_1 + 1) = 1$ for $j = 1, \ldots, n_l$ and $l = 1, \ldots, L$.
3: Initialize the parameters for all EXP3 layers as $\beta_l = \min\big\{1, \sqrt{\frac{n_l \log n_l}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5: Update the probability distribution for pulling candidates in $J_l$:
$$p_j^{(l)}(t) = \frac{\beta_l}{n_l} + (1 - \beta_l)\frac{w_j^{(l)}(t)}{\sum_{i=1}^{n_l} w_i^{(l)}(t)}$$
6: $i_t(l) \leftarrow j \in [n_l]$ with probability $p_j^{(l)}(t)$, for all $l = 1, \ldots, L$.
7: Run the contextual bandit algorithm with hyper-parameters $\alpha^{(l)} = \alpha^{(l)}_{i_t(l)}$ to pull an arm.
8: Observe the reward $Y_t$ and update the components of the contextual bandit algorithm.
9: Update all L EXP3 bandits: $\hat{y}_t^{(l)}(j) \leftarrow 0$ if $j \neq i_t(l)$; otherwise, $\hat{y}_t^{(l)}(j) \leftarrow Y_t/p_j^{(l)}(t)$.
10: For all $l = 1, \ldots, L$, let $w_j^{(l)}(t+1) = w_j^{(l)}(t) \times \exp\big(\frac{\beta_l}{n_l}\hat{y}_t^{(l)}(j)\big)$.
11: end for
For Theoretical-Explore, OP and TL, which tune only the exploration parameter, we set the regularization parameter to λ = 1. In all the experiments below, the total number of rounds is T = 10,000. We run the comparisons on both simulations and the benchmark MovieLens 100K real dataset. Due to limited space, descriptions of the dataset settings are deferred to Appendix A.4. Results averaged over 10 independently repeated experiments are reported below. From Figure 2, we observe: 1) When tuning only one hyper-parameter (the exploration parameter in our experiments), the proposed method outperforms previous tuning methods. Further, the theoretical exploration parameter does not perform well and tends to be too conservative in practice, which is consistent with the results shown in Table 1. 2) When tuning multiple hyper-parameters, previous methods do not apply. We find that the Syndicated Bandits framework usually outperforms TL-Combined and is significantly better than the Corral-Combined method, whose regret is exponential in the number of tuning parameters. 3) Using Syndicated Bandits to tune multiple hyper-parameters usually outperforms tuning one parameter only. This demonstrates the practical need for auto-tuning multiple hyper-parameters in bandit algorithms. See the Appendix for additional experiments on tuning three hyper-parameters in SGD-TS [14].
7 Conclusion
In this paper, we propose a two-layer bandit structure for auto-tuning the exploration parameter in contextual bandit algorithms, where offline tuning is impossible. To further accommodate tuning multiple hyper-parameters in contextual bandit algorithms with complicated models, we generalize our method to the Syndicated Bandits framework. This is the first framework that can auto-tune multiple hyper-parameters dynamically from observations in the contextual bandit environment, with theoretical regret guarantees that avoid an exponential dependency on the total number of hyper-parameters to be tuned. We show that our proposed algorithm obtains $\tilde{O}(T^{2/3})$ regret in general and has optimal $\tilde{O}(\sqrt{T})$ regret for UCB-based algorithms when every candidate in the tuning set is no smaller than the theoretical exploration parameter. Our work is general enough to handle the tuning tasks in many contextual bandit algorithms. Experimental results also validate the effectiveness of our proposed framework.
Acknowledgments and Disclosure of Funding
We are grateful for the insightful comments from the anonymous reviewers and area chair. This work was partially supported by the National Science Foundation under grants CCF-1934568, DMS-1811405, DMS-1811661, DMS-1916125, DMS-2113605, DMS-2210388, IIS-2008173 and IIS-2048280. CJH is also supported by Samsung, Google, Sony and the Okawa Foundation.
1. What is the focus and contribution of the paper regarding hyperparameter tuning for contextual bandits? 2. What are the strengths of the proposed approach, particularly in its intuitive idea and regret guarantee? 3. What are the weaknesses of the paper, especially regarding its limited scope and suboptimal regret bound? 4. How does the reviewer assess the clarity and presentation of the proof, particularly when discussing multiple algorithms? 5. Are there any further questions or suggestions from the reviewer regarding the paper's content or presentation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper studied hyperparameter tuning for contextual bandits. The authors proposed a two-layer bandit structure to tune the exploration parameter, where the top layer applies EXP3 to select a hyperparameter as an adversarial MAB and the bottom layer is the contextual bandit to be tuned. The authors then generalized it to Syndicated Bandits to tune multiple hyperparameters. The proposed method achieves $\tilde{O}(T^{2/3} + \sum_l \sqrt{n_l T})$ regret for tuning UCB- and TS-based algorithms. Empirical evaluations show that the proposed method outperforms baselines.
Strengths And Weaknesses
Strength: Hyperparameter tuning for contextual bandits is an important and interesting problem. The idea of using adversarial bandits to select the best hyperparameter is simple yet very intuitive. The proposed method has a non-trivial (although possibly not optimal) regret guarantee and could also be of interest in practice. The paper is also well written and the main paper is easy to follow.
Weakness: I have two concerns regarding the paper: The regret is only analyzed for UCB- and TS-based generalized linear bandits and is not general. The proof technique heavily relies on analyzing the confidence ellipsoid of generalized linear bandits. Is it possible to extend the two-layer method to tuning other specific algorithms, such as elimination-based algorithms or adversarial bandits such as EXP3, or even general bandit algorithms, just like other model selection baselines? The $O(T^{2/3})$ regret seems not optimal. The key challenge and novelty in the proof is bounding Quantity (A), where the history depends on the sequence of pulled hyperparameters instead of a contextual bandit run with a consistent hyperparameter. However, the corralling idea can achieve $O(\sqrt{T})$ regret as discussed in the related work, and is strictly better if only tuning one hyperparameter. Do the authors suspect this $O(T^{2/3})$ is tight and inevitable in the general case, if not allowing exponential dependency on the number of hyperparameters? A minor suggestion: the proof is a bit hard to follow as it mixes UCB and TS, and I would suggest the authors improve the presentation.
---------------After rebuttal--------------- Most of my questions are answered and I increased my score accordingly. I suggest the authors clearly discuss the regret dependency on dimension d in the final version.
Questions
Besides the questions in the weakness section, I have two clarification questions: Is the expected regret upper bound for tuning TS-based algorithms defined as Bayesian regret or frequentist regret? I tried to follow the proof of Theorem 1 and it looks like frequentist regret to me, but the proof mixes UCB and TS and is not very clear. What is the dependency of the regret on the feature dimension d? Is it the same as for the bottom-layer algorithm, e.g., linear in d for UCB and $d^{1.5}$ for TS?
Limitations
See weakness.
NIPS
Title Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms Abstract The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real applications, including recommender systems, online advertising and clinical trials. As many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. As an example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter which controls the trade-off between exploration and exploitation. A proper choice of the hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in contextual bandit environment since there is no pre-collected dataset and the decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto tuning the exploration parameter and further generalize it to the Syndicated Bandits framework which can learn multiple hyper-parameters dynamically in contextual bandit environment. We derive the regret bounds of our proposed Syndicated Bandits framework and show it can avoid its regret dependent exponentially in the number of hyper-parameters to be tuned. Moreover, it achieves optimal regret bounds under certain scenarios. Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, UCB-GLM, etc. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework. 1 Introduction The stochastic contextual bandit problem models the well-known exploration-exploitation dilemma in a repeated game between a player and an environment. At each round, the player sequentially interacts with the environment by pulling an arm from a pool of K arms, where every arm is associated with a ∗Work done prior to joining Amazon. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). d-dimensional contextual feature vector. Only the stochastic reward corresponding to the pulled arm is revealed to the player. The goal of the player is to maximize the cumulative reward or minimize the cumulative regret. Due to the partial feedback setting, the player has to balance between exploitation — pulling the arm that has the best estimated reward so far — and exploration — exploring whether there are uncertain arms that may be better than the current estimated best arm. With substantial applications in recommender systems [17], online advertising [20], clinical trials [25], etc., bandit algorithms have been extensively studied during the past few decades. In general, there are two exploration techniques, Upper Confidence Bound (UCB) [6, 17, 18] and Thompson Sampling (TS) [4, 5] algorithms. The UCB algorithm addresses the dilemma optimistically by pulling the arm that has the biggest upper confidence bound. The TS algorithm usually assumes that the relationship between the contextual features and rewards follows a prior model and it uses new observations at each round to estimate the posterior model. The player then pulls the arm that has the best estimated reward based on the posterior model. In general, the contextual bandit problems have regret lower bounded by O( √ T ) [13, 17], where T is the total number of rounds. 
Both UCB and TS algorithms have been shown to achieve optimal regret bounds in (generalized) linear bandit problems [17, 18], kernelized bandit problems [12] and even in the contextual bandit problem with more complicated models such as neural networks [28]. Despite the popularity of the contextual bandit problem, there are some practical issues that prevent it from being used widely in practice. In both UCB and TS, there are hyper-parameters that are unknown to the player. One of the most important hyper-parameters is the exploration parameter, which controls the trade-off between exploration and exploitation. A good choice of the exploration parameter is essential for the algorithm to perform well and for the theory to hold. Another commonly seen hyper-parameter is the regularization parameter λ in ridge regression or generalized linear model, which is used to model the relationship between features and rewards in (generalized) linear bandits. In contextual bandit problems with complex models such as neural network, the recently proposed NeuralUCB [28] algorithm has far more than just two hyper-parameters. NeuralUCB also needs to select the network width, network depth, step size for gradient descent to solve the neural networks and gradient descent steps, etc. Due to the nature of the bandit environment, where the decisions have to be made in real time, it is inherently difficult to tune the hyper-parameters by the traditional offline tuning methods, such as cross validation, since when you have decided to use a parameter in partial datasets and make a decision based on this, the regret incurred by this decision will never be reversible in contextual bandit environment. In many prominent bandit works [17, 14, 28, 10], the experiments are conducted by running a grid search on the possible choices of parameters and only the best result is reported. Although the performance of the best grid search results is of academic interest, it is not possible to do grid search in practice. In some other works [15], exploration parameter is set as a sufficient, theoretically derived, and often unknown value, but this value may be too conservative and may not achieve good performances in practice, which can be seen from the experiments in Table 1. In this work, we first propose a two-layer bandit structure that can automatically tune the exploration parameter dynamically from the observed data. The two-layer bandit structure has its similarities to the bandit-over-bandit (BOB) algorithm [11] proposed for the non-stationary stochastic bandit problems, where it uses BOB idea to successfully adapt its sliding-window sizes by restarting the algorithms in epochs. Motivated by the two-layer bandit structure we propose in Section 4, we generalize it to the “Syndicated Bandits” framework where there could be multiple hyper-parameters to be tuned in the contextual bandit algorithm. We provide theoretical guarantees for our framework and show that our proposed auto tuning method in general has regret upper bound Õ(T 2/3) + Õ( ∑L l=1 √ nlT ). Here L is the total number of hyper-parameters to be tuned and nl is the number of candidates in the tuning set of the l-th hyper-parameter. When the unknown theoretical exploration parameter is no bigger than any elements in the tuning set, our proposed framework has optimal regret upper bound Õ( √ T ) + Õ( ∑L l=1 √ nlT ) for UCB-based algorithms. 
Our framework is general enough to handle tuning tasks in many contextual bandit algorithms as long as the arms to be pulled at round t follows a fixed distribution given the hyper-parameters to be used at this round and the past information. This includes many popular contextual bandit algorithms such as Linear UCB (LinUCB) [17, 1], Linear TS (LinTS) [5, 10], UCB-GLM [18], etc. Our proposed Syndicated Bandits framework is the first work that considers tuning multiple parameters dynamically from observations in the contextual bandit problems with theoretical guarantees. We provide a regret bound that avoids the exponential dependency on the total number of hyper-parameters to be tuned. This is one of the main contributions of our proposed work. In Section 6, we show by experiments that our proposed framework improves over existing works, as well as the bandit algorithms that use the unknown theoretically derived exploration parameter. 2 Related work There is a rich line of works on multi-armed bandit (MAB) and stochastic contextual bandit algorithms, including (generalized) linear bandits, kernelized bandits and neural bandits, etc. Most of them follow the UCB and TS exploration techniques. We refer the readers to [17, 18, 4, 5, 10, 12, 28] for the seminal works regarding the bandit problems. There are many previous works that utilize algorithms in the stochastic MAB [23] setting to solve the hyper-parameter optimization problem [21, 19]. There are also some online hyper-parameter tuning works such as [24], however, those mainly focuses on reducing the training cost for tuning parameters of neural networks online and they are not considering minimizing the cumulative regret in contextual bandit problems. In the following, we will only pay attention to related works on the tuning tasks in stochastic contextual bandits. [22] proposed a meta-learning method for learning exploration parameters in contextual bandit problems. It learns a good exploration strategy in synthetic datasets and applies it to the real contextual bandit problems by an imitation study. The meta-learning algorithm is compared with seven baseline contextual bandit algorithms and achieves good empirical results. We note that this algorithm cannot learn the exploration parameters adaptively from observations in the contextual bandit environment. In [9], the authors first proposed OPLINUCB and DOPLINUCB algorithms to learn exploration parameters dynamically. OPLINUCB treats the possible choices of hyper-parameters as arms and uses a standard MAB TS algorithm to choose parameters. It then uses the chosen parameter in the contextual bandit algorithm. However, this method does not have theoretical guarantee in general, since the MAB TS only works when the rewards of the candidate hyper-parameters in the tuning set stay stationary over time. For hyper-parameter selections in contextual bandit problems, the best exploration parameter does not stay the same all the time. This is because in later rounds, when the learning is sophisticated, less exploration is better. However, in the beginning, more exploration is preferred due to the uncertainty. This non-stationary nature in tuning hyper-parameters makes the performance of OPLINUCB unstable in practice. DOPLINUCB is a similar tuning method as OPLINUCB, except that it uses the CTree algorithm to select hyper-parameters at each round. 
It is shown in [9] that DOPLINUCB does not outperform OPLINUCB in stationary contetxual bandit environments, where the reward-feature model does not change over time. Another close line of literature is on model selections in bandit algorithms. [16] tackles the feature selection problem in bandit algorithms and achieve O(T 2/3d1/3∗ ) where d∗ is the total number of optimal features. [3] uses the corralling idea to create a master algorithm to choose the best bandit model from a set of M base models. Hyper-parameter tuning problem can be formulated as a model selection problem in [3], where we can treat bandit algorithms with different hyper-parameters as the base models. The theoretical regret bound of the corralling idea [3] is O( √ MT +MRmax), where M is the total number of base models and Rmax is the maximum regret of M base models if they were to run alone. This means that the regret bound will be exponentially dependent on the total number of hyper-parameters to be tuned. In addition, if there is one hyper-parameter in the tuning set that gives linear regret of the algorithm, then Rmax is linear in T which makes the corralling idea have linear regret in worst case. Our algorithm is also much more efficient than the corralling idea when M is big. The corralling idea requires updating all M base models/ algorithms at each round. However, our algorithm only needs to update the selected model/ bandit algorithm with selected hyper-parameter at each round. When the time complexity of updating the model/ algorithm is big, the corralling idea is expensive. For example, if we tune configurations for UCB-GLM, the corralling idea needs O(MT 2d) time, while the time complexity of our algorithm is only O(MT + T 2d). We address here that none of the previous works can tune multiple parameters dynamically from observations. Although OPLINUCB [9] and the corralling idea [3] can treat all the hyper-parameters as a single parameter and set the tuning set as all the possible combinations of hyper-parameters. This will lead to exponential number of configurations which may not be efficient in both computation and theoretical regret bounds. Our proposed Syndicated framework avoids the exponential regret bound. Notations: For a vector x ∈ Rd, we use ∥x∥ to denote its l2 norm and ∥x∥A := √ xTAx for a positive-definite matrix A ∈ Rd×d. Finally, we denote [n] := {1, 2, . . . , n}. 3 Preliminaries We study the hyper-parameter selection tasks in a stochastic contextual bandit problem with K arms, where K can be an infinite number. Assume there are in total T rounds, at each round t ∈ [T ], the player is given K arms, represented by a set of feature vectors At = {xt,a|a ∈ [K]} ⊂ Rd, At is drawn IID from an unknown distribution with ∥xt,a∥ ≤ 1 for all t ∈ [T ] and a ∈ [K], where xt,a is a d-dimensional feature vector that contains the information of arm a at round t. The player makes a decision by pulling an arm at ∈ [K] based on At and past observations. We make a common regularity assumption as in [14, 18], i.e. there exists a constant σ0 > 0 such that λmin ( E[ 1k ∑k a=1 xt,ax ⊤ t,a] ) > σ0. The player can only observe the rewards of the pulled arms. Denote Xt := xt,at as the feature vector of the pulled arm at round t and Yt the corresponding reward. We assume the expected rewards and features follow a model E[Yt|Xt] = µ(XTt θ∗), where µ(·) is a known model function and θ∗ is the true but unknown model parameter. When µ(x) = x, this becomes the well-studied linear bandits problem. 
When µ(·) is a generalized linear model or a neural network, this becomes the generalized linear bandits (GLB) and neural bandits respectively. Without loss of generality, we assume that there exists a positive constant S such that ∥θ∗∥ ≤ S. We also assume the mean rewards µ(xTt,aθ ∗) ∈ [0, 1] and observed rewards Yt ∈ [0, 1]. This is a noncritical assumption, which can be easily relaxed to any bounded interval. IfFt = σ({as,As, Ys}ts=1∪ At+1) is the information up to round t, we assume the observed rewards follow a sub-Gaussian distribution with parameter σ2, i.e., Yt = µ(XTt θ ∗) + ϵt, where ϵt are independent random noises that satisfy E[ebϵt |Ft−1] ≤ b 2σ2 2 for all t and b ∈ R. Denote a ∗ t = argmaxa∈[K] µ(X T t θ ∗) as the optimal arm at round t and xt,∗ as its corresponding feature, the goal is to minimize the cumulative regret over T rounds defined in the following equation. R(T ) = T∑ t=1 [ µ(xTt,∗θ ∗)− µ(XTt θ∗) ] . (1) For linear bandits where µ(x) = x, classic bandit algorithms such as LinUCB [1, 17] and LinTS [2] compute an estimate of the model parameter θ̂t using ridge regression with regularization parameter λ > 0, i.e., θ̂t = V −1t ∑t−1 s=1 XsYs, where Vt = λId + ∑t−1 s=1 XsX T s . Shown by [1], with probability at least 1− δ, the true model parameter θ∗ is contained in the following confidence set, Ct = { θ ∈ Rd : ∥θ − θ̂t∥Vt ≤ α(t) } , (2) where α(t) = σ √ d log ( 1 + t/λ δ ) + S √ λ. (3) To balance the trade-off between exploration and exploitation, there are in general two techniques. For example, in linear bandits, LinUCB explores optimistically by pulling the arm with the maximum upper confidence bound, while LinTS adds randomization by drawing a sample model from the posterior distribution and pulls an arm based on it. at = argmax a xTt,aθ̂t + α(t)∥xt,a∥V −1t , (LinUCB) θTSt ∼ N(θ̂t, α(t)2V −1t ) and at = argmax a xTt,aθ TS t . (LinTS) In the following, we call α(t) the exploration parameter. As suggested by the theories in [1, 17], a conservative choice of the exploration parameter is to follow Equation 3. However, in Equation 3, the upper bound of the l2 norm of the model parameter S and the sub-Gaussian parameter σ are unknown to the player, which makes it difficult to track theoretical choices of the exploration parameter. In Table 1, we show the cumulative regret of LinUCB [1, 17] and LinTS [5] in a simulation study with d = 5, T = 10000 and K = 100. Rewards are simulated from N(xTt,aθ ∗, 0.5). The model parameter θ∗ and feature vectors xt,a are drawn from Uniform(− 1√d , 1√ d ). Two scenarios are considered in this table. In the first scenario, the feature of each arm keeps the same over T rounds. While in the second scenario, the features are re-simulated from Uniform(− 1√ d , 1√ d ) at different rounds. We run a grid search of the exploration parameter in {0, 0.5, 1, . . . , 10} and report the best grid search result, as well as the results using the theoretical exploration parameter given by Equation 3 (last column in Table 1). As we shall see in Table 1, the best exploration parameter is not the same for different scenarios. Therefore, which exploration parameter to use is an instance-dependent problem and the best exploration parameter should always be chosen dynamically based on the observations. Meanwhile, theoretical exploration parameters do not always give the best performances from Table 1. 
On the other hand, in many other works where the model of the contextual bandit problem is more complex, such as the generalized linear bandit [14] and the neural bandit [28], there may be many more hyper-parameters than just $\alpha(t)$.

4 A two-layer bandit structure for tuning exploration parameters

In the previous section, we discussed that the best hyper-parameters are instance-dependent. In this section, we propose a two-layer bandit structure to automatically learn the best hyper-parameter from data at each round. We take learning the best exploration parameter as an example; however, we emphasize that this structure can also be applied to learn any other single hyper-parameter.

We randomly select arms for the first $T_1$ rounds to warm up the algorithm. In all later rounds of this two-layer bandit structure, the top layer follows an adversarial MAB policy, namely the EXP3 algorithm [7]. Let $J$ be the tuning set of all possible exploration parameters. At each round $t > T_1$, the top layer selects a candidate exploration parameter $\alpha_{i_t} \in J$, where $\alpha_i$ is the $i$-th element of $J$ and $i_t$ is the index selected at round $t$. The bottom layer runs the contextual bandit algorithm with the selected exploration parameter $\alpha_{i_t}$. Details are listed in Algorithm 1.

4.1 Regret analysis

Given all the past information $\mathcal{F}_{t-1}$, denote $a_t(\alpha_j \mid \mathcal{F}_{t-1})$ as the arm pulled at round $t$ when the exploration parameter is $\alpha_j$, and $X_t(\alpha_j \mid \mathcal{F}_{t-1}) = x_{t,\,a_t(\alpha_j \mid \mathcal{F}_{t-1})}$ as the corresponding feature vector. Note that in our algorithm, $X_t := X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})$ when $t > T_1$. To analyze the cumulative regret, we first decompose the regret defined in Equation 1 into three parts:
$$\mathbb{E}[R(T)] = \mathbb{E}\Big[\sum_{t=1}^{T} \big(\mu(x_{t,*}^\top \theta) - \mu(X_t^\top \theta)\big)\Big]
= \underbrace{\mathbb{E}\Big[\sum_{t=T_1+1}^{T} \big(\mu(x_{t,*}^\top \theta) - \mu(X_t(\alpha^* \mid \mathcal{F}_{t-1})^\top \theta)\big)\Big]}_{\text{Quantity (A)}}
+ \underbrace{\mathbb{E}\Big[\sum_{t=T_1+1}^{T} \big(\mu(X_t(\alpha^* \mid \mathcal{F}_{t-1})^\top \theta) - \mu(X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})^\top \theta)\big)\Big]}_{\text{Quantity (B)}}
+ \underbrace{\mathbb{E}\Big[\sum_{t=1}^{T_1} \big(\mu(x_{t,*}^\top \theta) - \mu(X_t^\top \theta)\big)\Big]}_{\text{Quantity (C)}},$$
where $\mu(\cdot)$ is the reward-feature model function and $\alpha^* \in J$ is an arbitrary candidate exploration parameter in $J$. Quantity (A) is the regret of the contextual bandit algorithm run with the fixed hyper-parameter $\alpha^*$ under the past history $\mathcal{F}_{t-1}$ generated by our tuning strategy at every round. Quantity (B) is the extra regret paid to tune the hyper-parameter. Quantity (C) is the regret paid for random exploration in the warm-up phase and is controlled at the scale of $O(T_1)$. We show in Lemma 1 and Theorem 1 below that, under mild conditions, our auto-tuning method in Algorithm 1 does not pay much for selecting parameters in most scenarios.

Algorithm 1 A Two-layer Auto Tuning Algorithm
Input: time horizon $T$, warm-up length $T_1$, candidate hyper-parameter set $J = \{\alpha_i\}_{i=1}^n$.
1: Randomly choose $a_t \in [K]$ and record $X_t, Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w_j(T_1 + 1) = 1$ for $j = 1, \ldots, n$.
3: Initialize the exploration parameter for EXP3 as $\beta = \min\big\{1, \sqrt{\frac{n \log n}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5:   Update the probability distribution over the candidates in $J$: $p_j(t) = \frac{\beta}{n} + (1 - \beta)\frac{w_j(t)}{\sum_{i=1}^{n} w_i(t)}$.
6:   $i_t \leftarrow j \in [n]$ with probability $p_j(t)$.
7:   Run the contextual bandit algorithm with hyper-parameter $\alpha(t) = \alpha_{i_t}$ to pull an arm; for example,
     $a_t = \arg\max_{a \in [K]} x_{t,a}^\top \hat\theta_t + \alpha_{i_t} \|x_{t,a}\|_{V_t^{-1}}$  (LinUCB)
     $\theta_t^{TS} \sim N(\hat\theta_t, \alpha_{i_t}^2 V_t^{-1})$ and $a_t = \arg\max_a x_{t,a}^\top \theta_t^{TS}$.  (LinTS)
8:   Observe reward $Y_t$ and update the components of the contextual bandit algorithm.
9:   Update the EXP3 components: $\hat y_t(j) \leftarrow 0$ if $j \ne i_t$, $\hat y_t(j) \leftarrow Y_t / p_j(t)$ if $j = i_t$, and $w_j(t+1) = w_j(t) \times \exp\big(\frac{\beta}{n} \hat y_t(j)\big)$.
10: end for

Since the arms pulled by the contextual bandit layer also affect the update of the EXP3 layer in Algorithm 1, the standard EXP3 result is not directly applicable to bounding Quantity (B). We modify the proof techniques of [7] and present the details in the Appendix.

Lemma 1. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameter to be used by the contextual bandit algorithm at round $t$, the arm to be pulled follows a fixed distribution. For a random sequence of hyper-parameters $\{\alpha_{i_1}, \ldots, \alpha_{i_T}\}$ selected by the EXP3 layer in Algorithm 1, where arm $a_t(\alpha_{i_t})$ is pulled in the contextual bandit layer at round $t$, we have
$$\max_{\alpha \in J} \mathbb{E}\Big[\sum_{t=1}^{T} \mu\big(X_t(\alpha \mid \mathcal{F}_{t-1})^\top \theta\big)\Big] - \mathbb{E}\Big[\sum_{t=1}^{T} \mu\big(X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})^\top \theta\big)\Big] \le 2\sqrt{(e-1)\, n T \log n},$$
where $J = \{\alpha_1, \ldots, \alpha_n\}$ is the tuning set of the hyper-parameter and $|J| = n$.

To bound Quantity (A), we note that we cannot directly use any existing regret bound from the literature, since the past information $\mathcal{F}_{t-1}$ here is based on the sequence of arms pulled by our auto-tuning algorithm rather than the arms generated by using $\alpha^*$ at each round, and this history affects the update of the bandit algorithm. We overcome this challenge by noticing that the consistency of $\hat\theta_t$ plays a vital role in most proofs for (generalized) linear bandits, and that this consistency still holds after a warm-up period or with a large exploration rate. Therefore, we can expect a tight bound on the cumulative regret of running the same exploration parameter even under a different line of observations $\mathcal{F}_{t-1}$, provided there is sufficient exploration. Another crux of the proof is that the regret is usually related to $\|x_t\|_{V_t^{-1}}$, which can similarly be bounded after sufficient exploration. Bounding Quantity (A) and combining with Lemma 1, we obtain the following theorem.

Theorem 1. Assume that, given the past information $\mathcal{F}_{t-1}$ generated by our proposed algorithm for arm selection and the hyper-parameter to be used by the contextual bandit algorithm, the arm to be pulled follows a fixed distribution. For UCB- and TS-based generalized linear bandit algorithms with exploration hyper-parameters (LinUCB, UCB-GLM, LinTS, etc.), the regret of Algorithm 1 satisfies:
(1) $\mathbb{E}[R(T)] = \tilde O(T^{2/3}) + O(\sqrt{n(T - T_1) \log n})$ given the warm-up length $T_1 = \tilde O(T^{2/3})$.
(2) For UCB-based bandits, if the theoretical exploration parameter $\alpha(T)$ is no larger than any element in $J$, then $\mathbb{E}[R(T)] = \tilde O(\sqrt{T}) + O(\sqrt{n T \log n})$ with $T_1 = 0$.
(3) If $\mathcal{A}_t$ is a convex set, and the smallest principal curvature in any neighborhood of the optimal vector $x_{t,*} \in \mathcal{A}_t$ on $\mathcal{A}_t$ can be lower bounded by some positive constant $c$, then $\mathbb{E}[R(T)] = \tilde O(T^{4/7}) + O(\sqrt{n(T - T_1) \log n})$ after a warm-up period of length $T_1 = O(T^{4/7})$.

Remark 1. We expect a similar result to Theorem 1 (2) for TS-based bandit algorithms, and we offer an intuitive explanation in Appendix 4. Moreover, the conditions in Theorem 1 (3) can easily be verified in many cases; for example, they hold when $\mathcal{A}_t = \{x \in \mathbb{R}^d : \|x\| \le a\}$ for any $a > 0$.
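Before turning to multiple hyper-parameters, the following sketch ties the pieces together: an EXP3 top layer choosing among candidate exploration parameters and a LinUCB bottom layer pulling arms. It reuses the illustrative LinearBanditEnv and LinUCB classes from the sketches above and is meant as a reading aid for Algorithm 1, not the authors' implementation.

```python
import numpy as np

def two_layer_tuning(env, alphas, T, T1=0, lam=1.0, seed=0):
    """Sketch of Algorithm 1: EXP3 picks alpha_{i_t}; LinUCB pulls the arm."""
    rng = np.random.default_rng(seed)
    n = len(alphas)
    bandit = LinUCB(env.d, lam=lam, alpha=alphas[0], seed=seed)
    w = np.ones(n)                                   # step 2: EXP3 exponential weights
    beta = min(1.0, np.sqrt(n * np.log(n) / ((np.e - 1) * T)))  # step 3
    for t in range(T):
        X = env.arms()
        if t < T1:                                   # step 1: warm-up, pull a random arm
            a = rng.integers(env.K)
        else:
            p = beta / n + (1 - beta) * w / w.sum()  # step 5: mixing distribution
            i_t = rng.choice(n, p=p)                 # step 6: draw a candidate index
            bandit.alpha = alphas[i_t]               # step 7: run LinUCB with alpha_{i_t}
            a = bandit.select(X)
        y = env.reward(X[a])
        bandit.update(X[a], y)                       # step 8: update the bottom layer
        if t >= T1:                                  # step 9: importance-weighted update
            w[i_t] *= np.exp((beta / n) * y / p[i_t])
    return bandit
```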
5 The Syndicated Bandits framework for selecting multiple hyper-parameters

There can be multiple hyper-parameters in a contextual bandit algorithm. For example, in linear bandit algorithms such as LinUCB [1, 17] and LinTS [5], the exploration parameter $\alpha$ and the regularization parameter $\lambda$ of the ridge regression are both hyper-parameters to be tuned. In more recent contextual bandit works, there can be even more than two hyper-parameters. For example, the NeuralUCB algorithm [28], proposed for contextual bandit problems with a deep neural network model, has many tuning parameters, such as the network width, the network depth, the step size for gradient descent, the number of gradient descent steps, as well as the exploration parameter and the regularization parameter $\lambda$. Another example can be found in [14], where an efficient SGD-TS algorithm is proposed for generalized linear bandits; its number of tuning parameters is also more than two.

A naive strategy for auto-tuning multiple hyper-parameters is to use Algorithm 1 with a tuning set $J$ that contains all possible combinations of the hyper-parameters. Assume there are in total $L$ hyper-parameters $\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(L)}$, and for each $l \in [L]$ the tuning set for $\alpha^{(l)}$ is $J_l = \{\alpha^{(l)}_1, \ldots, \alpha^{(l)}_{n_l}\}$, where $n_l$ is the size of the corresponding tuning set. Then there are in total $\prod_{l=1}^{L} n_l$ possible combinations. Based on Lemma 1, the extra regret paid to tune the hyper-parameters (Quantity (B)) is upper bounded by $\tilde O(\sqrt{\prod_{l=1}^{L} n_l \, T})$. Therefore, the naive approach makes the regret grow exponentially with the number of tuning parameters.

To mitigate this issue, we propose the Syndicated Bandits framework, which can deal with multiple hyper-parameters while avoiding the exponential dependency on the number of tuning parameters in the regret bound. We create $L + 1$ bandit instances in this framework. In the bottom layer, the contextual bandit algorithm decides which arm to pull. On top of the contextual bandit layer, there are $L$ EXP3 bandits, denoted as EXP3$(l)$ for $l \in [L]$; each EXP3 algorithm is responsible for tuning one hyper-parameter only. At round $t$, if $i_t(l)$ is the index of the hyper-parameter in $J_l$ selected by the EXP3$(l)$ bandit and the selected hyper-parameter is denoted $\alpha^{(l)}_{i_t(l)}$ for $l \in [L]$, then the contextual bandit algorithm in the bottom layer uses these parameters to make a decision and receives a reward based on the pulled arm. The reward is fed to all $L + 1$ bandits to update their components. An illustration of the algorithm and more details are presented in Figure 1 and Algorithm 2 in the Appendix; a code sketch is also given at the end of this section.

5.1 Regret analysis

At round $t$, given all the past information $\mathcal{F}_{t-1}$, denote $a_t(\alpha^{(1)}_{j_1}, \ldots, \alpha^{(L)}_{j_L} \mid \mathcal{F}_{t-1})$ as the arm pulled by the contextual bandit algorithm if the parameters are chosen as $\alpha^{(l)} = \alpha^{(l)}_{j_l}$ for all $l \in [L]$, and let $X_t(\alpha^{(1)}_{j_1}, \ldots, \alpha^{(L)}_{j_L} \mid \mathcal{F}_{t-1})$ be the corresponding feature vector. Recall that $\mu(\cdot)$ is the reward-feature model function; then for an arbitrary combination of hyper-parameters $(\alpha^{(1)}_*, \ldots, \alpha^{(L)}_*)$,
$$\begin{aligned}
\mathbb{E}[R(T)] ={}& \sum_{t=1}^{T_1} \mathbb{E}\big[\mu(x_{t,*}^\top \theta) - \mu(X_t^\top \theta)\big]
+ \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(x_{t,*}^\top \theta) - \mu(X_t(\alpha^{(1)}_*, \ldots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^\top \theta)\big] \\
&+ \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_*, \ldots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^\top \theta) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_*, \ldots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^\top \theta)\big] \\
&+ \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_*, \ldots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^\top \theta) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_{i_t(2)}, \alpha^{(3)}_*, \ldots \mid \mathcal{F}_{t-1})^\top \theta)\big] \\
&+ \cdots + \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_{i_t(1)}, \ldots, \alpha^{(L-1)}_{i_t(L-1)}, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^\top \theta) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \ldots, \alpha^{(L)}_{i_t(L)} \mid \mathcal{F}_{t-1})^\top \theta)\big].
\end{aligned}$$

The first quantity represents the regret from pure exploration. The second quantity is the regret of the contextual bandit algorithm run with the fixed hyper-parameters $\alpha^{(1)}_*, \ldots, \alpha^{(L)}_*$ under the past history $\mathcal{F}_{t-1}$ generated by our tuning strategy at every round. The next $L$ quantities in the decomposition are the regret from tuning the parameters in the EXP3 layers, which can be bounded using techniques similar to Lemma 1; however, the correlations between the parameters make the analysis more involved. Formally, we provide the following theorem to guarantee the performance of the Syndicated Bandits framework. Proofs are deferred to the Appendix.

Theorem 2. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameters to be used by the contextual bandit algorithm at round $t$, the arm to be pulled by the contextual bandit algorithm follows a fixed distribution. Then the auto-tuning method in Algorithm 2 with warm-up length $T_1 = O(T^{2/3})$ has the following regret bound in general:
$$\mathbb{E}[R(T)] \le \tilde O(T^{2/3}) + O\Big(\sum_{l=1}^{L} \sqrt{n_l (T - T_1) \log n_l}\Big).$$

Remark 2. Note that this result avoids an exponential dependency of the regret on the number of hyper-parameters to be tuned. When the hyper-parameters to be tuned are the exploration parameter $\alpha$ and the regularization parameter $\lambda$ of the (generalized) linear model, we also obtain the same conclusions as in Theorem 1 (3). Please refer to Appendix A.3 for a formal statement and its proof.

Remark 3. Without any assumptions, Algorithm 2 has regret depending on $d$ as $O(d^3 + d T^{2/3})$ for both UCB and TS. In practice, usually $d \ll T$.
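As forward-referenced above, here is a minimal sketch of the Syndicated loop with $L = 2$ layers tuning LinUCB's $\alpha$ and $\lambda$. Each hyper-parameter gets its own EXP3 layer, and all layers receive the same reward. It reuses the illustrative LinearBanditEnv from the earlier sketch; the names and structure are our own, not the authors' code.

```python
import numpy as np

class EXP3:
    """One EXP3 layer responsible for a single hyper-parameter's tuning set."""

    def __init__(self, candidates, T, seed=0):
        self.rng = np.random.default_rng(seed)
        self.candidates = list(candidates)
        n = len(self.candidates)
        self.w = np.ones(n)
        self.beta = min(1.0, np.sqrt(n * np.log(n) / ((np.e - 1) * T)))

    def draw(self):
        n = len(self.w)
        self.p = self.beta / n + (1 - self.beta) * self.w / self.w.sum()
        self.i = self.rng.choice(n, p=self.p)
        return self.candidates[self.i]

    def update(self, reward):
        # Importance-weighted update for the drawn index only.
        self.w[self.i] *= np.exp((self.beta / len(self.w)) * reward / self.p[self.i])

def syndicated(env, tuning_sets, T, seed=0):
    """Sketch of Algorithm 2: one EXP3 layer per hyper-parameter, shared reward."""
    layers = {name: EXP3(vals, T, seed) for name, vals in tuning_sets.items()}
    XX, b = np.zeros((env.d, env.d)), np.zeros(env.d)   # sum X X^T and sum X y
    theta_hat = np.zeros(env.d)
    for t in range(T):
        X = env.arms()
        alpha = layers["alpha"].draw()                  # step 6, layer for alpha
        lam = layers["lam"].draw()                      # step 6, layer for lambda
        V_inv = np.linalg.inv(lam * np.eye(env.d) + XX) # V_t = lam * I + sum X X^T
        theta_hat = V_inv @ b
        bonus = np.sqrt(np.einsum("kd,de,ke->k", X, V_inv, X))
        a = int(np.argmax(X @ theta_hat + alpha * bonus))  # step 7: LinUCB pull
        y = env.reward(X[a])
        XX += np.outer(X[a], X[a]); b += y * X[a]       # step 8: bottom-layer update
        for layer in layers.values():                   # steps 9-10: same reward to all
            layer.update(y)
    return theta_hat
```

A call mirroring the experimental tuning sets below would be, for instance, `syndicated(LinearBanditEnv(), {"alpha": [0, 0.01, 0.1, 1, 10], "lam": [0.01, 0.1, 1]}, T=10000)`.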
6 Experimental results

We show by experiments that our proposed methods outperform various contextual bandit algorithms that use the theoretical exploration parameter, as well as existing tuning methods. We compare different hyper-parameter selection methods in three popular contextual bandit algorithms: LinUCB [1, 17], LinTS [5] and UCB-GLM [18] with a logistic model. In practice, we set the warm-up length to $T_1 = 0$ and tune both the exploration and regularization parameters. We compare the following hyper-parameter selection methods. Theoretical-Explore [1]: at round $t$, this method uses the theoretical exploration parameter $\alpha(t)$ defined in Equation 3. OP [9]: we make simple modifications to OPLINUCB so that it can tune the exploration parameters of LinUCB, LinTS and UCB-GLM. Corral [3]: this method uses the corralling idea to tune the exploration parameter only. Corral-Combined [3]: this method treats bandits with different combinations of the exploration parameter and the regularization parameter $\lambda$ as base models and uses the corralling idea to tune the configurations. TL (our work, Algorithm 1): our proposed Algorithm 1, where the two-layer bandit structure tunes the exploration parameter only. TL-Combined (our work, Algorithm 1): this method tunes both the exploration parameter $\alpha$ and the regularization parameter $\lambda$ using Algorithm 1, with the tuning set containing all possible combinations of $\alpha$ and $\lambda$. Syndicated (our work, Algorithm 2): this method keeps two separate tuning sets for $\alpha$ and $\lambda$, and uses the Syndicated Bandits of Algorithm 2. We set the tuning set of the exploration parameter $\alpha$ to $\{0, 0.01, 0.1, 1, 10\}$ and the tuning set of the regularization parameter $\lambda$ to $\{0.01, 0.1, 1\}$ in TL-Combined, Corral-Combined and Syndicated.

Algorithm 2 The Syndicated Bandits Framework for Auto Tuning Multiple Hyper-parameters
Input: time horizon $T$, warm-up length $T_1$, candidate hyper-parameter sets $\{J_l\}_{l=1}^{L}$.
1: Randomly choose $a_t \in [K]$ and record $X_t, Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w^{(l)}_j(T_1 + 1) = 1$ for $j = 1, \ldots, n_l$ and $l = 1, \ldots, L$.
3: Initialize the parameters of all EXP3 layers as $\beta_l = \min\big\{1, \sqrt{\frac{n_l \log n_l}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5:   Update the probability distribution over the candidates in $J_l$: $p^{(l)}_j(t) = \frac{\beta_l}{n_l} + (1 - \beta_l)\frac{w^{(l)}_j(t)}{\sum_{i=1}^{n_l} w^{(l)}_i(t)}$.
6:   $i_t(l) \leftarrow j \in [n_l]$ with probability $p^{(l)}_j(t)$, for all $l = 1, \ldots, L$.
7:   Run the contextual bandit algorithm with hyper-parameters $\alpha^{(l)} = \alpha^{(l)}_{i_t(l)}$ to pull an arm.
8:   Observe reward $Y_t$ and update the components of the contextual bandit algorithm.
9:   Update all $L$ EXP3 bandits: $\hat y^{(l)}_t(j) \leftarrow 0$ if $j \ne i_t(l)$; otherwise $\hat y^{(l)}_t(j) \leftarrow Y_t / p^{(l)}_j(t)$.
10:  For all $l = 1, \ldots, L$, let $w^{(l)}_j(t+1) = w^{(l)}_j(t) \times \exp\big(\frac{\beta_l}{n_l} \hat y^{(l)}_t(j)\big)$.
11: end for

For Theoretical-Explore, OP and TL, which tune only the exploration parameter, we set the regularization parameter to $\lambda = 1$. In all the experiments below, the total number of rounds is $T = 10{,}000$. We run the comparisons on both simulations and the benchmark MovieLens 100K real dataset. Due to limited space, the descriptions of the dataset settings are deferred to Appendix A.4. Results averaged over 10 independently repeated experiments are reported below. From Figure 2, we observe the following. 1) When tuning only one hyper-parameter (the exploration parameter in our experiments), the proposed method outperforms previous tuning methods. Furthermore, the theoretical exploration parameter does not perform well: it tends to be too conservative in practice, which is consistent with the results in Table 1. 2) When tuning multiple hyper-parameters, previous methods do not apply. Using the Syndicated Bandits framework usually outperforms TL-Combined and is significantly better than the Corral-Combined method, whose regret is exponential in the number of tuning parameters. 3) Using Syndicated Bandits to tune multiple hyper-parameters usually outperforms tuning one parameter only. This demonstrates a practical need for auto-tuning multiple hyper-parameters in bandit algorithms. See the Appendix for additional experiments on tuning 3 hyper-parameters in SGD-TS [14].

7 Conclusion

In this paper, we propose a two-layer bandit structure for auto-tuning the exploration parameter in contextual bandit algorithms, where offline tuning is impossible. To further accommodate tuning multiple hyper-parameters in contextual bandit algorithms with complicated models, we generalize our method to the Syndicated Bandits framework. This is the first framework that can auto-tune multiple hyper-parameters dynamically from observations in the contextual bandit environment, with theoretical regret guarantees that avoid an exponential dependency on the total number of hyper-parameters to be tuned. We show that our proposed algorithm attains $\tilde O(T^{2/3})$ regret in general and the optimal $\tilde O(\sqrt{T})$ regret for UCB-based algorithms when all candidates in the tuning set are no smaller than the theoretical exploration parameter. Our framework is general enough to handle the tuning tasks in many contextual bandit algorithms. Experimental results also validate the effectiveness of our proposed work.

Acknowledgments and Disclosure of Funding

We are grateful for the insightful comments from the anonymous reviewers and area chair. This work was partially supported by the National Science Foundation under grants CCF-1934568, DMS-1811405, DMS-1811661, DMS-1916125, DMS-2113605, DMS-2210388, IIS-2008173 and IIS-2048280. CJH is also supported by Samsung, Google, Sony and the Okawa Foundation.
1. What is the focus and contribution of the paper regarding contextual bandits?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its empirical evaluation and assumptions?
4. Do you have any concerns or suggestions regarding the candidate hyperparameter set used in the study?
5. How does the reviewer assess the limitations of the paper's content?
Summary Of The Paper
This work studies the problem of hyperparameter tuning for contextual bandit algorithms. Due to the unique properties of contextual bandits (no pre-collected dataset, and the decisions have to be made online), offline tuning methods cannot be directly applied. The authors proposed a two-layer framework to automatically tune the exploration hyperparameter in the process of bandit learning. An upper regret bound of $\tilde O(T^{2/3})$ is proved for the proposed method in general. A $\tilde O(T^{1/2})$ bound is derived for the UCB-based algorithm when the theoretical exploration parameter is no bigger than any element in the tuning set.

Strengths And Weaknesses
Strengths: The hyperparameter tuning problem is quite important and the proposed method is reasonable. The theoretical analysis is sound to me.
Weaknesses: The empirical evaluation is quite toy. (1) Why not also tune more advanced bandit models, e.g., NeuralUCB? (2) In addition to the MovieLens dataset, performing the evaluation on the Yahoo news recommendation dataset would be more convincing. The $\tilde O(T^{2/3})$ upper regret bound is quite weak. The underlying assumptions on the search space of hyperparameter configurations are not clearly discussed. For example, must the candidate hyperparameter set be finite? If it must be finite, how large can it be? Can it be larger than $T$? In the experiments, the size of the set is pretty small. It would be helpful and interesting if the authors could add more discussion on the requirements of the candidate hyperparameter set, both theoretically and empirically.

Questions
Why not perform the tuning for NeuralUCB? What are the requirements of the candidate hyperparameter set, both theoretically and empirically?

Limitations
See weaknesses.
1. What is the focus and contribution of the paper regarding the tuning of hyperparameters in contextual bandit algorithms?
2. What are the strengths of the proposed approach, particularly in terms of its practical interest and regret bound analysis?
3. What are the weaknesses of the paper, especially regarding the lack of intuition and empirical results for more hyperparameters?
4. Do you have any concerns or questions about the connection between the syndicated bandits and multi-agent settings?
5. What are your thoughts on the limitation of the experimental results and the need for more test cases with additional hyperparameters?
Summary Of The Paper
This paper proposed a two-layer bandits framework for automatic tuning of multiple hyper-parameters in contextual bandit algorithms. The bottom layer hosts the original bandit algorithm (UCB- or TS-based), and the top layer is a set of EXP3 bandit algorithms, each of which is responsible for tuning one of the hyper-parameters in the parameter set. By sending the same global reward signal from the bottom layer to each of the top-layer bandit algorithms, the paper shows that the resulting regret bound has a linear (and not exponential) dependency on the number of hyper-parameters to be tuned. Experiments on synthetic and real-world data are performed for benchmarking and for showing the advantage of the proposed framework in terms of cumulative regret.

Strengths And Weaknesses
Strengths: This paper can be of significant practical interest to a broad audience group who are interested in and using bandit algorithms. The auto-tuning algorithm for single and multiple hyper-parameters with a desirable regret bound provides a very useful tool for successful deployment of bandit algorithms. The proposed algorithm comes with a regret bound analysis that offers assurance that its learning efficiency is much better than naively treating the multiple hyper-parameters as a single parameter with more options.
Weaknesses: Besides the theoretical analysis of the regret bound, it is also important to provide intuition on why the change from using a single top-layer bandit to multiple parallel bandits allows us to avoid the exponential dependency of the regret on the number of hyper-parameters. The experiments were all conducted on test cases with only two parameters. It would be great to have concrete examples with more hyper-parameters and associated empirical results.

Questions
The syndicated bandits share a similarity with a multi-agent setting, where each agent performs its dedicated task but the agents share the same global reward signals so as to act cooperatively. I wonder if you could draw any connection here to explain why it works much better, in terms of regret bound, than using a single agent to tune the multiple parameters. In the experiments, there are only two hyper-parameters with 15 possible combinations. Is the advantage seen in the plot of Syndicated over TL-Combined due to the limited number of iterations? It would be better and more convincing to demonstrate the advantage of Syndicated on a test case with more hyper-parameters.

Limitations
N.A.
NIPS
Title
Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms
Abstract
The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real applications, including recommender systems, online advertising and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. As an example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter which controls the trade-off between exploration and exploitation. A proper choice of the hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in the contextual bandit environment, since there is no pre-collected dataset and the decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter and further generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in the contextual bandit environment. We derive the regret bounds of our proposed Syndicated Bandits framework and show that its regret avoids an exponential dependency on the number of hyper-parameters to be tuned. Moreover, it achieves optimal regret bounds under certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, UCB-GLM, etc. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
1 Introduction
The stochastic contextual bandit problem models the well-known exploration-exploitation dilemma in a repeated game between a player and an environment. At each round, the player sequentially interacts with the environment by pulling an arm from a pool of K arms, where every arm is associated with a d-dimensional contextual feature vector. Only the stochastic reward corresponding to the pulled arm is revealed to the player. The goal of the player is to maximize the cumulative reward or minimize the cumulative regret. Due to the partial-feedback setting, the player has to balance between exploitation — pulling the arm that has the best estimated reward so far — and exploration — exploring whether there are uncertain arms that may be better than the current estimated best arm. With substantial applications in recommender systems [17], online advertising [20], clinical trials [25], etc., bandit algorithms have been extensively studied during the past few decades. In general, there are two exploration techniques: Upper Confidence Bound (UCB) [6, 17, 18] and Thompson Sampling (TS) [4, 5] algorithms. The UCB algorithm addresses the dilemma optimistically by pulling the arm that has the biggest upper confidence bound. The TS algorithm usually assumes that the relationship between the contextual features and rewards follows a prior model, and it uses new observations at each round to estimate the posterior model. The player then pulls the arm that has the best estimated reward based on the posterior model. In general, contextual bandit problems have regret lower bounded by $\Omega(\sqrt{T})$ [13, 17], where $T$ is the total number of rounds.
Both UCB and TS algorithms have been shown to achieve optimal regret bounds in (generalized) linear bandit problems [17, 18], kernelized bandit problems [12], and even in contextual bandit problems with more complicated models such as neural networks [28]. Despite the popularity of the contextual bandit problem, there are some practical issues that prevent it from being used widely in practice. In both UCB and TS, there are hyper-parameters that are unknown to the player. One of the most important hyper-parameters is the exploration parameter, which controls the trade-off between exploration and exploitation. A good choice of the exploration parameter is essential for the algorithm to perform well and for the theory to hold. Another commonly seen hyper-parameter is the regularization parameter $\lambda$ in ridge regression or the generalized linear model, which is used to model the relationship between features and rewards in (generalized) linear bandits. In contextual bandit problems with complex models such as neural networks, the recently proposed NeuralUCB [28] algorithm has far more than just two hyper-parameters. NeuralUCB also needs to select the network width, network depth, step size for the gradient descent that solves the neural network, number of gradient descent steps, etc. Due to the nature of the bandit environment, where decisions have to be made in real time, it is inherently difficult to tune the hyper-parameters by traditional offline tuning methods such as cross validation: once a parameter is chosen based on partial data and a decision is made with it, the regret incurred by that decision can never be reversed in the contextual bandit environment. In many prominent bandit works [17, 14, 28, 10], the experiments are conducted by running a grid search over the possible choices of parameters and only the best result is reported. Although the performance of the best grid-search result is of academic interest, it is not possible to run a grid search in practice. In some other works [15], the exploration parameter is set to a sufficient, theoretically derived, and often unknown value, but this value may be too conservative and may not achieve good performance in practice, as can be seen from the experiments in Table 1. In this work, we first propose a two-layer bandit structure that can automatically tune the exploration parameter dynamically from the observed data. The two-layer bandit structure is similar to the bandit-over-bandit (BOB) algorithm [11] proposed for non-stationary stochastic bandit problems, which uses the BOB idea to successfully adapt its sliding-window sizes by restarting the algorithm in epochs. Motivated by the two-layer bandit structure we propose in Section 4, we generalize it to the "Syndicated Bandits" framework, where there can be multiple hyper-parameters to be tuned in the contextual bandit algorithm. We provide theoretical guarantees for our framework and show that our proposed auto-tuning method in general has regret upper bound $\tilde{O}(T^{2/3}) + \tilde{O}(\sum_{l=1}^{L}\sqrt{n_l T})$. Here $L$ is the total number of hyper-parameters to be tuned and $n_l$ is the number of candidates in the tuning set of the $l$-th hyper-parameter. When the unknown theoretical exploration parameter is no larger than any element in the tuning set, our proposed framework has optimal regret upper bound $\tilde{O}(\sqrt{T}) + \tilde{O}(\sum_{l=1}^{L}\sqrt{n_l T})$ for UCB-based algorithms.
Our framework is general enough to handle tuning tasks in many contextual bandit algorithms, as long as the arm to be pulled at round $t$ follows a fixed distribution given the hyper-parameters to be used at this round and the past information. This includes many popular contextual bandit algorithms such as Linear UCB (LinUCB) [17, 1], Linear TS (LinTS) [5, 10], UCB-GLM [18], etc. Our proposed Syndicated Bandits framework is the first work that considers tuning multiple parameters dynamically from observations in contextual bandit problems with theoretical guarantees. We provide a regret bound that avoids an exponential dependency on the total number of hyper-parameters to be tuned. This is one of the main contributions of our proposed work. In Section 6, we show by experiments that our proposed framework improves over existing works, as well as over the bandit algorithms that use the unknown theoretically derived exploration parameter.
2 Related work
There is a rich line of works on multi-armed bandit (MAB) and stochastic contextual bandit algorithms, including (generalized) linear bandits, kernelized bandits, neural bandits, etc. Most of them follow the UCB and TS exploration techniques. We refer the readers to [17, 18, 4, 5, 10, 12, 28] for the seminal works on bandit problems. Many previous works utilize algorithms from the stochastic MAB [23] setting to solve the hyper-parameter optimization problem [21, 19]. There are also some online hyper-parameter tuning works such as [24]; however, those mainly focus on reducing the training cost of tuning neural network parameters online, and they do not consider minimizing the cumulative regret in contextual bandit problems. In the following, we only pay attention to related works on tuning tasks in stochastic contextual bandits. [22] proposed a meta-learning method for learning exploration parameters in contextual bandit problems. It learns a good exploration strategy on synthetic datasets and applies it to real contextual bandit problems via an imitation study. The meta-learning algorithm is compared with seven baseline contextual bandit algorithms and achieves good empirical results. We note that this algorithm cannot learn the exploration parameters adaptively from observations in the contextual bandit environment. In [9], the authors first proposed the OPLINUCB and DOPLINUCB algorithms to learn exploration parameters dynamically. OPLINUCB treats the possible choices of hyper-parameters as arms and uses a standard MAB TS algorithm to choose parameters; it then uses the chosen parameter in the contextual bandit algorithm. However, this method does not have a theoretical guarantee in general, since the MAB TS only works when the rewards of the candidate hyper-parameters in the tuning set stay stationary over time. For hyper-parameter selection in contextual bandit problems, the best exploration parameter does not stay the same all the time. This is because in later rounds, when the learning is sophisticated, less exploration is better, while in the beginning more exploration is preferred due to the uncertainty. This non-stationary nature of tuning hyper-parameters makes the performance of OPLINUCB unstable in practice. DOPLINUCB is a tuning method similar to OPLINUCB, except that it uses the CTree algorithm to select hyper-parameters at each round.
It is shown in [9] that DOPLINUCB does not outperform OPLINUCB in stationary contextual bandit environments, where the reward-feature model does not change over time. Another close line of literature is on model selection in bandit algorithms. [16] tackles the feature selection problem in bandit algorithms and achieves $O(T^{2/3} d_*^{1/3})$ regret, where $d_*$ is the total number of optimal features. [3] uses the corralling idea to create a master algorithm that chooses the best bandit model from a set of $M$ base models. The hyper-parameter tuning problem can be formulated as a model selection problem in [3], where we treat bandit algorithms with different hyper-parameters as the base models. The theoretical regret bound of the corralling idea [3] is $O(\sqrt{MT} + M R_{\max})$, where $M$ is the total number of base models and $R_{\max}$ is the maximum regret of the $M$ base models if each were run alone. This means that the regret bound is exponential in the total number of hyper-parameters to be tuned. In addition, if there is one hyper-parameter in the tuning set that gives linear regret, then $R_{\max}$ is linear in $T$, which makes the corralling idea have linear regret in the worst case. Our algorithm is also much more efficient than the corralling idea when $M$ is big. The corralling idea requires updating all $M$ base models/algorithms at each round, whereas our algorithm only needs to update the selected model/bandit algorithm with the selected hyper-parameter at each round. When the time complexity of updating the model/algorithm is large, the corralling idea is expensive. For example, if we tune configurations for UCB-GLM, the corralling idea needs $O(MT^2 d)$ time, while the time complexity of our algorithm is only $O(MT + T^2 d)$. We stress that none of the previous works can tune multiple parameters dynamically from observations. Although OPLINUCB [9] and the corralling idea [3] can treat all the hyper-parameters as a single parameter and set the tuning set as all possible combinations of hyper-parameters, this leads to an exponential number of configurations, which is inefficient both computationally and in the theoretical regret bound. Our proposed Syndicated framework avoids the exponential regret bound.
Notations: For a vector $x \in \mathbb{R}^d$, we use $\|x\|$ to denote its $\ell_2$ norm and $\|x\|_A := \sqrt{x^T A x}$ for a positive-definite matrix $A \in \mathbb{R}^{d \times d}$. Finally, we denote $[n] := \{1, 2, \dots, n\}$.
3 Preliminaries
We study the hyper-parameter selection tasks in a stochastic contextual bandit problem with $K$ arms, where $K$ can be infinite. Assume there are in total $T$ rounds; at each round $t \in [T]$, the player is given $K$ arms, represented by a set of feature vectors $A_t = \{x_{t,a} \mid a \in [K]\} \subset \mathbb{R}^d$, drawn IID from an unknown distribution with $\|x_{t,a}\| \le 1$ for all $t \in [T]$ and $a \in [K]$, where $x_{t,a}$ is a $d$-dimensional feature vector that contains the information of arm $a$ at round $t$. The player makes a decision by pulling an arm $a_t \in [K]$ based on $A_t$ and past observations. We make a common regularity assumption as in [14, 18], i.e., there exists a constant $\sigma_0 > 0$ such that $\lambda_{\min}\big(\mathbb{E}[\frac{1}{K}\sum_{a=1}^{K} x_{t,a} x_{t,a}^{\top}]\big) > \sigma_0$. The player can only observe the rewards of the pulled arms. Denote $X_t := x_{t,a_t}$ as the feature vector of the pulled arm at round $t$ and $Y_t$ the corresponding reward. We assume the expected rewards and features follow a model $\mathbb{E}[Y_t \mid X_t] = \mu(X_t^T \theta^*)$, where $\mu(\cdot)$ is a known model function and $\theta^*$ is the true but unknown model parameter. When $\mu(x) = x$, this becomes the well-studied linear bandit problem.
When $\mu(\cdot)$ is a generalized linear model or a neural network, this becomes the generalized linear bandit (GLB) and the neural bandit, respectively. Without loss of generality, we assume that there exists a positive constant $S$ such that $\|\theta^*\| \le S$. We also assume the mean rewards $\mu(x_{t,a}^T \theta^*) \in [0, 1]$ and the observed rewards $Y_t \in [0, 1]$. This is a noncritical assumption, which can easily be relaxed to any bounded interval. If $\mathcal{F}_t = \sigma(\{a_s, A_s, Y_s\}_{s=1}^{t} \cup A_{t+1})$ is the information up to round $t$, we assume the observed rewards follow a sub-Gaussian distribution with parameter $\sigma^2$, i.e., $Y_t = \mu(X_t^T \theta^*) + \epsilon_t$, where the $\epsilon_t$ are independent random noises that satisfy $\mathbb{E}[e^{b\epsilon_t} \mid \mathcal{F}_{t-1}] \le e^{b^2 \sigma^2 / 2}$ for all $t$ and $b \in \mathbb{R}$. Denote $a_t^* = \arg\max_{a \in [K]} \mu(x_{t,a}^T \theta^*)$ as the optimal arm at round $t$ and $x_{t,*}$ as its corresponding feature. The goal is to minimize the cumulative regret over $T$ rounds, defined as
$R(T) = \sum_{t=1}^{T} \big[\mu(x_{t,*}^T \theta^*) - \mu(X_t^T \theta^*)\big]$. (1)
For linear bandits where $\mu(x) = x$, classic bandit algorithms such as LinUCB [1, 17] and LinTS [2] compute an estimate $\hat{\theta}_t$ of the model parameter using ridge regression with regularization parameter $\lambda > 0$, i.e., $\hat{\theta}_t = V_t^{-1} \sum_{s=1}^{t-1} X_s Y_s$, where $V_t = \lambda I_d + \sum_{s=1}^{t-1} X_s X_s^T$. As shown by [1], with probability at least $1 - \delta$, the true model parameter $\theta^*$ is contained in the following confidence set:
$C_t = \{\theta \in \mathbb{R}^d : \|\theta - \hat{\theta}_t\|_{V_t} \le \alpha(t)\}$, (2)
where
$\alpha(t) = \sigma \sqrt{d \log\big(\frac{1 + t/\lambda}{\delta}\big)} + S\sqrt{\lambda}$. (3)
To balance the trade-off between exploration and exploitation, there are in general two techniques. For example, in linear bandits, LinUCB explores optimistically by pulling the arm with the maximum upper confidence bound, while LinTS adds randomization by drawing a sample model from the posterior distribution and pulling an arm based on it:
$a_t = \arg\max_a \, x_{t,a}^T \hat{\theta}_t + \alpha(t) \|x_{t,a}\|_{V_t^{-1}}$, (LinUCB)
$\theta_t^{TS} \sim N(\hat{\theta}_t, \alpha(t)^2 V_t^{-1})$ and $a_t = \arg\max_a \, x_{t,a}^T \theta_t^{TS}$. (LinTS)
In the following, we call $\alpha(t)$ the exploration parameter. As suggested by the theory in [1, 17], a conservative choice of the exploration parameter is to follow Equation 3. However, in Equation 3, the upper bound $S$ on the $\ell_2$ norm of the model parameter and the sub-Gaussian parameter $\sigma$ are unknown to the player, which makes it difficult to track theoretical choices of the exploration parameter. In Table 1, we show the cumulative regret of LinUCB [1, 17] and LinTS [5] in a simulation study with $d = 5$, $T = 10000$ and $K = 100$. Rewards are simulated from $N(x_{t,a}^T \theta^*, 0.5)$. The model parameter $\theta^*$ and the feature vectors $x_{t,a}$ are drawn from Uniform$(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$. Two scenarios are considered in this table. In the first scenario, the feature of each arm stays the same over the $T$ rounds, while in the second scenario, the features are re-simulated from Uniform$(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$ at different rounds. We run a grid search of the exploration parameter over $\{0, 0.5, 1, \dots, 10\}$ and report the best grid-search result, as well as the results using the theoretical exploration parameter given by Equation 3 (last column in Table 1). As can be seen in Table 1, the best exploration parameter is not the same across scenarios. Therefore, which exploration parameter to use is an instance-dependent problem, and the best exploration parameter should always be chosen dynamically based on the observations. Meanwhile, Table 1 also shows that the theoretical exploration parameter does not always give the best performance.
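To make the role of these two hyper-parameters concrete, the following is a minimal sketch of LinUCB with an explicit exploration parameter $\alpha$ and ridge regularization $\lambda$. It is an illustrative rendering of the equations above, not the code used in the paper; the class and method names are our own.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB with explicit exploration (alpha) and ridge (lam) hyper-parameters."""
    def __init__(self, d, alpha=1.0, lam=1.0):
        self.alpha = alpha
        self.V = lam * np.eye(d)   # V_t = lam * I_d + sum_s X_s X_s^T
        self.b = np.zeros(d)       # sum_s X_s Y_s

    def select(self, arms):
        """arms: (K, d) array of feature vectors; returns the index of the UCB-maximizing arm."""
        V_inv = np.linalg.inv(self.V)
        theta = V_inv @ self.b     # ridge estimate of theta*
        # UCB score: x^T theta_hat + alpha * ||x||_{V^{-1}} for every arm at once
        widths = np.sqrt(np.einsum('kd,de,ke->k', arms, V_inv, arms))
        return int(np.argmax(arms @ theta + self.alpha * widths))

    def update(self, x, y):
        """Rank-one update with the pulled arm's feature x and observed reward y."""
        self.V += np.outer(x, x)
        self.b += y * x
```

Note that the tuning methods discussed in this paper only need to touch `alpha` (and `lam` at construction time), which is what makes a bandit-over-bandit wrapper around such an algorithm straightforward.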
On the other hand, in many other works where the model of the contextual bandit problem is more complex, such as the generalized linear bandit [14] or the neural bandit [28], there may be many more hyper-parameters than just $\alpha(t)$.
4 A two-layer bandit structure for tuning exploration parameters
In the previous section, we discussed that the best hyper-parameters should be instance-dependent. In this section, we propose a two-layer bandit structure to automatically learn the best hyper-parameter from data at each round. We take learning the best exploration parameter as an example; however, we want to emphasize that this structure can also be applied to learn any other single hyper-parameter. We randomly select arms for the first $T_1$ rounds to warm up the algorithm. For all later rounds, in this two-layer bandit structure, the top layer follows an adversarial MAB policy, namely the EXP3 algorithm [7]. Assume $J$ is the tuning set of all possible exploration parameters. At each round $t > T_1$, the top layer selects a candidate exploration parameter $\alpha_{i_t} \in J$, where $\alpha_i$ is the $i$-th element of $J$ and $i_t$ is the selected index at round $t$. The bottom layer runs the contextual bandit algorithm based on the selected exploration parameter $\alpha_{i_t}$. Details are listed in Algorithm 1.
4.1 Regret analysis
Given all the past information $\mathcal{F}_{t-1}$, denote $a_t(\alpha_j \mid \mathcal{F}_{t-1})$ as the pulled arm when the exploration parameter is $\alpha_j$ at round $t$, and denote $X_t(\alpha_j \mid \mathcal{F}_{t-1}) = x_{t, a_t(\alpha_j \mid \mathcal{F}_{t-1})}$ as the corresponding feature vector under $\mathcal{F}_{t-1}$. Note that in our algorithm, $X_t := X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})$ when $t > T_1$. To analyze the cumulative regret, we first decompose the regret defined in Equation 1 into three parts:
$\mathbb{E}[R(T)] = \mathbb{E}\big[\sum_{t=1}^{T} \big(\mu(x_{t,*}^T \theta^*) - \mu(X_t^T \theta^*)\big)\big]$
$= \underbrace{\mathbb{E}\big[\sum_{t=T_1+1}^{T} \big(\mu(x_{t,*}^T \theta^*) - \mu(X_t(\alpha^* \mid \mathcal{F}_{t-1})^T \theta^*)\big)\big]}_{\text{Quantity (A)}} + \underbrace{\mathbb{E}\big[\sum_{t=T_1+1}^{T} \big(\mu(X_t(\alpha^* \mid \mathcal{F}_{t-1})^T \theta^*) - \mu(X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})^T \theta^*)\big)\big]}_{\text{Quantity (B)}} + \underbrace{\mathbb{E}\big[\sum_{t=1}^{T_1} \big(\mu(x_{t,*}^T \theta^*) - \mu(X_t^T \theta^*)\big)\big]}_{\text{Quantity (C)}}$,
where $\mu(\cdot)$ is the reward-feature model function and $\alpha^* \in J$ is an arbitrary candidate exploration parameter in $J$. Quantity (A) is the regret of the contextual bandit algorithm that runs with the same hyper-parameter $\alpha^*$ under the past history $\mathcal{F}_{t-1}$ generated from our tuning strategy every round. Quantity (B) is the extra regret paid to tune the hyper-parameter. Quantity (C) is the regret paid for random exploration in the warm-up phase and is of scale $O(T_1)$. We show in Lemma 1 and Theorem 1 below that our auto-tuning method in Algorithm 1 does not cost too much in selecting parameters in most scenarios under mild conditions.
Algorithm 1 A Two-layer Auto Tuning Algorithm
Input: time horizon $T$, warm-up length $T_1$, candidate hyper-parameter set $J = \{\alpha_i\}_{i=1}^{n}$.
1: Randomly choose $a_t \in [K]$ and record $X_t, Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w_j(T_1 + 1) = 1$ for $j = 1, \dots, n$.
3: Initialize the exploration parameter for EXP3 as $\beta = \min\big\{1, \sqrt{\frac{n \log n}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5: Update the probability distribution for pulling candidates in $J$: $p_j(t) = \frac{\beta}{n} + (1-\beta)\frac{w_j(t)}{\sum_{i=1}^{n} w_i(t)}$.
6: $i_t \leftarrow j \in [n]$ with probability $p_j(t)$.
7: Run the contextual bandit algorithm with hyper-parameter $\alpha(t) = \alpha_{i_t}$ to pull an arm. For example, pull arms according to
$a_t = \arg\max_{a=1,\dots,K} \, x_{t,a}^T \hat{\theta}_t + \alpha_{i_t} \|x_{t,a}\|_{V_t^{-1}}$, (LinUCB)
$\theta_t^{TS} \sim N(\hat{\theta}_t, \alpha_{i_t}^2 V_t^{-1})$ and $a_t = \arg\max_a \, x_{t,a}^T \theta_t^{TS}$. (LinTS)
8: Observe the reward $Y_t$ and update the components of the contextual bandit algorithm.
9: Update the EXP3 components: $\hat{y}_t(j) \leftarrow 0$ if $j \neq i_t$, $\hat{y}_t(j) \leftarrow Y_t / p_j(t)$ if $j = i_t$, and $w_j(t+1) = w_j(t) \exp\big(\frac{\beta}{n} \hat{y}_t(j)\big)$.
10: end for
Since the arms pulled by the contextual bandit layer also affect the update of the EXP3 layer in Algorithm 1, the standard EXP3 result is not directly applicable to bounding Quantity (B). We modify the proof techniques in [7] and present the proof details in the Appendix.
Lemma 1. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameter to be used by the contextual bandit algorithm at round $t$, the arm to be pulled follows a fixed distribution. For a random sequence of hyper-parameters $\{\alpha_{i_1}, \dots, \alpha_{i_T}\}$ selected by the EXP3 layer in Algorithm 1, with arm $a_t(\alpha_{i_t})$ pulled by the contextual bandit layer at round $t$, we have
$\max_{\alpha \in J} \mathbb{E}\big[\sum_{t=1}^{T} \mu(X_t(\alpha \mid \mathcal{F}_{t-1})^T \theta^*)\big] - \mathbb{E}\big[\sum_{t=1}^{T} \mu(X_t(\alpha_{i_t} \mid \mathcal{F}_{t-1})^T \theta^*)\big] \le 2\sqrt{(e-1)\, n T \log n}$,
where $J = \{\alpha_1, \dots, \alpha_n\}$ is the tuning set of the hyper-parameter and $|J| = n$.
To bound Quantity (A), note that we cannot directly use any existing regret bound from the literature, since the past information $\mathcal{F}_{t-1}$ here is based on the sequence of arms pulled by our auto-tuning algorithm rather than the arms generated by using $\alpha^*$ at each round, and the history affects the update of the bandit algorithm. We overcome this challenge by noticing that the consistency of $\hat{\theta}_t$ plays a vital role in most of the proofs for (generalized) linear bandits, and this consistency still holds after a warm-up period or with a large exploration rate. Therefore, we can expect a tight bound on the cumulative regret of using the same exploration parameter even under another line of observations $\mathcal{F}_{t-1}$ with sufficient exploration. Another crux of the proof is that the regret is usually related to $\|x_t\|_{V_t^{-1}}$, which can similarly be bounded after sufficient exploration. After bounding Quantity (A) and combining Lemma 1, we get the following theorem.
Theorem 1. Assume that, given the past information $\mathcal{F}_{t-1}$ generated from our proposed algorithm for arm selection and the hyper-parameter to be used by the contextual bandit, the arm to be pulled follows a fixed distribution. For UCB- and TS-based generalized linear bandit algorithms with exploration hyper-parameters (LinUCB, UCB-GLM, LinTS, etc.), the regret of Algorithm 1 satisfies:
(1) $\mathbb{E}[R(T)] = \tilde{O}(T^{2/3}) + O(\sqrt{n(T - T_1) \log n})$ given the warm-up length $T_1 = \tilde{O}(T^{2/3})$.
(2) For UCB-based bandits, if the theoretical exploration parameter $\alpha(T)$ is no larger than any element in $J$, then $\mathbb{E}[R(T)] = \tilde{O}(\sqrt{T}) + O(\sqrt{nT \log n})$ with $T_1 = 0$.
(3) If $A_t$ is a convex set, and the smallest principal curvature in any neighborhood of the optimal vector $x_{t,*} \in A_t$ on $A_t$ can be lower bounded by some positive constant $c$, then $\mathbb{E}[R(T)] = \tilde{O}(T^{4/7}) + O(\sqrt{n(T - T_1) \log n})$ after a warm-up period of length $T_1 = O(T^{4/7})$.
Remark 1. We could expect a similar result for TS-based bandit algorithms as in Theorem 1 (2), and we offer an intuitive explanation in the Appendix. Moreover, the conditions in Theorem 1 (3) can easily be verified in many cases; for example, they hold when $A_t = \{x \in \mathbb{R}^d : \|x\| \le a\}$ for any $a > 0$.
5 The Syndicated Bandits framework for selecting multiple hyper-parameters
There can be multiple hyper-parameters in a contextual bandit algorithm. For example, in linear bandit algorithms such as LinUCB [1, 17] and LinTS [5], the exploration parameter $\alpha$ and the regularization parameter $\lambda$ of the ridge regression are both hyper-parameters to be tuned.
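Before turning to the multi-hyper-parameter case, here is a minimal sketch of the two-layer tuner of Algorithm 1, pairing an EXP3 top layer with a bottom-layer bandit such as the LinUCB sketch above. The `env` interface (`arms`, `reward`) and the `make_bandit` factory are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

def exp3_tune(env, make_bandit, alphas, T, T1=0, rng=None):
    """Two-layer auto-tuning sketch: EXP3 over candidate exploration parameters."""
    rng = rng or np.random.default_rng(0)
    n = len(alphas)
    beta = min(1.0, np.sqrt(n * np.log(n) / ((np.e - 1) * T)))  # EXP3 exploration rate
    w = np.ones(n)                                  # exponential weights over candidates
    bandit = make_bandit()                          # bottom-layer state is shared across rounds
    for t in range(T):
        arms = env.arms(t)                          # (K, d) context matrix at round t
        if t < T1:                                  # warm-up: pull uniformly at random
            a = rng.integers(len(arms))
        else:
            p = beta / n + (1 - beta) * w / w.sum() # mixing uniform and weight-proportional
            i = rng.choice(n, p=p)                  # top layer samples a candidate index
            bandit.alpha = alphas[i]                # bottom layer runs with the chosen alpha
            a = bandit.select(arms)
        y = env.reward(t, a)                        # observed reward, assumed in [0, 1]
        bandit.update(arms[a], y)
        if t >= T1:
            w[i] *= np.exp(beta / n * (y / p[i]))   # importance-weighted EXP3 update
    return bandit
```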
In more recent contextual bandit works, there can be even more than two hyper-parameters. For example, the NeuralUCB algorithm [28], proposed for contextual bandit problems with a deep neural network model, has many tuning parameters such as the network width, network depth, step size for gradient descent, number of gradient descent steps, as well as the exploration parameter and the regularization parameter $\lambda$, etc. Another example can be found in [14], where an efficient SGD-TS algorithm is proposed for generalized linear bandits; its number of tuning parameters is also more than two. A naive strategy for auto-tuning multiple hyper-parameters is to use Algorithm 1 and let the tuning set $J$ contain all possible combinations of the hyper-parameters. Assume there are in total $L$ hyper-parameters $\alpha^{(1)}, \alpha^{(2)}, \dots, \alpha^{(L)}$, and for each $l \in [L]$ the tuning set for $\alpha^{(l)}$ is $J_l = \{\alpha_1^{(l)}, \dots, \alpha_{n_l}^{(l)}\}$, where $n_l$ is the size of the corresponding tuning set. Then there are in total $\prod_{l=1}^{L} n_l$ possible combinations. By Lemma 1, the extra regret paid to tune the hyper-parameters (Quantity (B)) is upper bounded by $\tilde{O}\big(\sqrt{\prod_{l=1}^{L} n_l \cdot T}\big)$. Therefore, this naive approach makes the regret grow exponentially with the number of tuning parameters. To mitigate this issue, we propose the Syndicated Bandits framework, which can deal with multiple hyper-parameters while avoiding the exponential dependency on the number of tuning parameters in the regret bound. We create $L + 1$ bandit instances in this framework. In the bottom layer, the contextual bandit algorithm decides which arm to pull. On top of the contextual bandit layer, there are $L$ EXP3 bandits, denoted EXP3(l) for $l \in [L]$; each EXP3 algorithm is responsible for tuning one hyper-parameter only. At round $t$, if $i_t(l)$ is the index of the hyper-parameter in $J_l$ selected by the EXP3(l) bandit, and the selected hyper-parameter is denoted $\alpha^{(l)}_{i_t(l)}$ for $l \in [L]$, then the contextual bandit algorithm in the bottom layer uses these parameters to make a decision and receives a reward based on the pulled arm. The reward is fed to all $L + 1$ bandits to update their components. An illustration of the algorithm and more details are presented in Figure 1 and in Algorithm 2 in the Appendix.
5.1 Regret analysis
At round $t$, given all the past information $\mathcal{F}_{t-1}$, denote $a_t(\alpha^{(1)}_{j_1}, \dots, \alpha^{(L)}_{j_L} \mid \mathcal{F}_{t-1})$ as the arm pulled by the contextual bandit algorithm if the parameters are chosen as $\alpha^{(l)} = \alpha^{(l)}_{j_l}$ for all $l \in [L]$, and let $X_t(\alpha^{(1)}_{j_1}, \dots, \alpha^{(L)}_{j_L} \mid \mathcal{F}_{t-1})$ be the corresponding feature vector. Recall that $\mu(\cdot)$ is the reward-feature model function; then for an arbitrary combination of hyper-parameters $(\alpha^{(1)}_*, \dots, \alpha^{(L)}_*)$,
$\mathbb{E}[R(T)] = \sum_{t=1}^{T_1} \mathbb{E}\big[\mu(x_{t,*}^T \theta^*) - \mu(X_t^T \theta^*)\big] + \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(x_{t,*}^T \theta^*) - \mu(X_t(\alpha^{(1)}_*, \dots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^T \theta^*)\big]$
$+ \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_*, \dots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^T \theta^*) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_*, \dots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^T \theta^*)\big]$
$+ \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_*, \dots, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^T \theta^*) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \alpha^{(2)}_{i_t(2)}, \alpha^{(3)}_*, \dots \mid \mathcal{F}_{t-1})^T \theta^*)\big]$
$+ \dots + \sum_{t=T_1+1}^{T} \mathbb{E}\big[\mu(X_t(\alpha^{(1)}_{i_t(1)}, \dots, \alpha^{(L-1)}_{i_t(L-1)}, \alpha^{(L)}_* \mid \mathcal{F}_{t-1})^T \theta^*) - \mu(X_t(\alpha^{(1)}_{i_t(1)}, \dots, \alpha^{(L)}_{i_t(L)} \mid \mathcal{F}_{t-1})^T \theta^*)\big]$.
The first quantity is the regret from pure exploration. The second quantity in the above decomposition is the regret of the contextual bandit algorithm that runs with the same hyper-parameters
$\alpha^{(1)}_*, \dots, \alpha^{(L)}_*$ under the past history $\mathcal{F}_{t-1}$ generated from our tuning strategy every round. The next $L$ quantities in the decomposition are the regret from tuning parameters in the EXP3 layers, which can be bounded using techniques similar to those in Lemma 1; however, the correlations between parameters make the analysis more complicated. Formally, we provide the following theorem to guarantee the performance of the Syndicated Bandits framework. Proofs are deferred to the Appendix.
Theorem 2. Assume that, given the past information $\mathcal{F}_{t-1}$ and the hyper-parameters to be used by the contextual bandit algorithm at round $t$, the arm to be pulled by the contextual bandit algorithm follows a fixed distribution. Then the auto-tuning method in Algorithm 2 with warm-up length $T_1 = O(T^{2/3})$ has the following regret bound in general:
$\mathbb{E}[R(T)] \le \tilde{O}(T^{2/3}) + O\big(\sum_{l=1}^{L} \sqrt{n_l (T - T_1) \log n_l}\big)$.
Remark 2. Note that this result avoids an exponential dependency of the regret on the number of hyper-parameters to be tuned. When the hyper-parameters to be tuned are the exploration parameter $\alpha$ and the regularization parameter $\lambda$ of the (generalized) linear model, we also have the same conclusions as in Theorem 1 (3). Please refer to Appendix A.3 for a formal statement and its proof.
Remark 3. Without any assumptions, Algorithm 2 has regret $O(d^3 + d T^{2/3})$ in its dependency on $d$ for both UCB and TS. In practice, usually $d \ll T$.
6 Experimental results
We show by experiments that our proposed methods outperform various contextual bandit algorithms using the theoretical exploration parameter, as well as existing tuning methods. We compare different hyper-parameter selection methods in three popular contextual bandit algorithms: LinUCB [1, 17], LinTS [5] and UCB-GLM [18] with a logistic model. In practice, we set the warm-up length to $T_1 = 0$ and tune both the exploration and regularization parameters. We compare the following hyper-parameter selection methods. Theoretical-Explore [1]: at round $t$, this method uses the theoretical exploration parameter $\alpha(t)$ defined in Equation 3. OP [9]: we make simple modifications of OPLINUCB to make it applicable to tuning exploration parameters for LinUCB, LinTS and UCB-GLM. Corral [3]: this method uses the corralling idea to tune the exploration parameter only. Corral-Combined [3]: this method treats bandits with different combinations of the exploration parameter and regularization parameter $\lambda$ as base models and uses the corralling idea to tune the configurations. TL (our work, Algorithm 1): our proposed Algorithm 1, where we use the two-layer bandit structure to tune the exploration parameter only. TL-Combined (our work, Algorithm 1): this method tunes both the exploration parameter $\alpha$ and the regularization parameter $\lambda$ using Algorithm 1, with the tuning set containing all possible combinations of $\alpha$ and $\lambda$. Syndicated (our work, Algorithm 2): this method keeps two separate tuning sets for $\alpha$ and $\lambda$, respectively, and uses the Syndicated Bandits of Algorithm 2. We set the tuning set for the exploration parameter $\alpha$ to $\{0, 0.01, 0.1, 1, 10\}$ and the tuning set for the regularization parameter $\lambda$ to $\{0.01, 0.1, 1\}$ in TL-Combined, Corral-Combined and Syndicated.
Algorithm 2 The Syndicated Bandits Framework for Auto Tuning Multiple Hyper-parameters
Input: time horizon $T$, warm-up length $T_1$, candidate hyper-parameter sets $\{J_l\}_{l=1}^{L}$.
1: Randomly choose $a_t \in [K]$ and record $X_t, Y_t$ for $t \in [T_1]$.
2: Initialize exponential weights $w_j^{(l)}(T_1 + 1) = 1$ for $j = 1, \dots, n_l$ and $l = 1, \dots, L$.
3: Initialize the parameters of all EXP3 layers as $\beta_l = \min\big\{1, \sqrt{\frac{n_l \log n_l}{(e-1)T}}\big\}$.
4: for $t = (T_1 + 1)$ to $T$ do
5: Update the probability distribution for pulling candidates in $J_l$: $p_j^{(l)}(t) = \frac{\beta_l}{n_l} + (1-\beta_l)\frac{w_j^{(l)}(t)}{\sum_{i=1}^{n_l} w_i^{(l)}(t)}$.
6: $i_t(l) \leftarrow j \in [n_l]$ with probability $p_j^{(l)}(t)$, for all $l = 1, \dots, L$.
7: Run the contextual bandit algorithm with hyper-parameters $\alpha^{(l)} = \alpha^{(l)}_{i_t(l)}$ to pull an arm.
8: Observe the reward $Y_t$ and update the components of the contextual bandit algorithm.
9: Update all $L$ EXP3 bandits: $\hat{y}_t^{(l)}(j) \leftarrow 0$ if $j \neq i_t(l)$; otherwise, $\hat{y}_t^{(l)}(j) \leftarrow Y_t / p_j^{(l)}(t)$.
10: For all $l = 1, \dots, L$, let $w_j^{(l)}(t+1) = w_j^{(l)}(t) \exp\big(\frac{\beta_l}{n_l}\hat{y}_t^{(l)}(j)\big)$.
11: end for
For Theoretical-Explore, OP, and TL, which tune only the exploration parameter, we set the regularization parameter to $\lambda = 1$. In all the experiments below, the total number of rounds is $T = 10{,}000$. We run the comparisons on both simulations and the benchmark MovieLens 100K real dataset. Due to limited space, the descriptions of the dataset settings are deferred to Appendix A.4. Results averaged over 10 independently repeated experiments are reported below. From Figure 2, we observe: 1) When tuning only one hyper-parameter (the exploration parameter in our experiments), the proposed method outperforms previous tuning methods. Further, the theoretical exploration parameter does not perform well and tends to be too conservative in practice, which is consistent with the results shown in Table 1. 2) When tuning multiple hyper-parameters, previous methods do not apply. We found that the Syndicated Bandits framework usually outperforms TL-Combined and is significantly better than the Corral-Combined method, whose regret is exponential in the number of tuning parameters. 3) Using Syndicated Bandits to tune multiple hyper-parameters usually outperforms tuning one parameter only, which demonstrates a practical need for auto-tuning multiple hyper-parameters in bandit algorithms. See the Appendix for additional experiments on tuning 3 hyper-parameters in SGD-TS [14].
7 Conclusion
In this paper, we propose a two-layer bandit structure for auto-tuning the exploration parameter in contextual bandit algorithms, where offline tuning is impossible. To further accommodate tuning tasks with multiple hyper-parameters in contextual bandit algorithms with complicated models, we generalize our method to the Syndicated Bandits framework. This is the first framework that can auto-tune multiple hyper-parameters dynamically from observations in the contextual bandit environment, with theoretical regret that avoids an exponential dependency on the total number of hyper-parameters to be tuned. We show that our proposed algorithm obtains $\tilde{O}(T^{2/3})$ regret in general and has optimal $\tilde{O}(\sqrt{T})$ regret for UCB-based algorithms when every candidate in the tuning set is no smaller than the theoretical exploration parameter. Our work is general enough to handle the tuning tasks in many contextual bandit algorithms. Experimental results also validate the effectiveness of our proposed work.
Acknowledgments and Disclosure of Funding
We are grateful for the insightful comments from the anonymous reviewers and area chair. This work was partially supported by the National Science Foundation under grants CCF-1934568, DMS-1811405, DMS-1811661, DMS-1916125, DMS-2113605, DMS-2210388, IIS-2008173 and IIS-2048280. CJH is also supported by Samsung, Google, Sony and the Okawa Foundation.
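To make the per-round mechanics of Algorithm 2 concrete, the following is a minimal sketch of the Syndicated layer structure: one EXP3 instance per hyper-parameter, all fed the same global reward. It is our own illustrative code under the paper's notation, not the authors' implementation; the class and method names are assumptions.

```python
import numpy as np

class EXP3:
    """One EXP3 top-layer instance per hyper-parameter, as in the Syndicated framework."""
    def __init__(self, n, T, rng):
        self.n, self.rng = n, rng
        self.beta = min(1.0, np.sqrt(n * np.log(n) / ((np.e - 1) * T)))
        self.w = np.ones(n)                         # exponential weights over candidates

    def draw(self):
        """Sample a candidate index for this round (Algorithm 2, steps 5-6)."""
        self.p = self.beta / self.n + (1 - self.beta) * self.w / self.w.sum()
        self.i = self.rng.choice(self.n, p=self.p)
        return self.i

    def feed(self, y):
        """Importance-weighted weight update with the shared reward (steps 9-10)."""
        self.w[self.i] *= np.exp(self.beta / self.n * (y / self.p[self.i]))

# Per round, with one EXP3 layer per hyper-parameter:
#   idx = [layer.draw() for layer in layers]   # one candidate index per hyper-parameter
#   ... run the contextual bandit with the selected configuration, observe reward y ...
#   for layer in layers: layer.feed(y)         # the same global reward updates every layer
```

Note the contrast with the naive combined tuner: each layer maintains weights over only its own $n_l$ candidates, which is what replaces the $\prod_l n_l$ factor by $\sum_l n_l$ in the regret.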
1. What is the main contribution of the paper regarding bandit hyper-parameter auto-tuning?
2. What are the strengths of the proposed approach, particularly its applicability to various bandit algorithms?
3. What is the reviewer's concern or question regarding the selection of the best bandit algorithms from a candidate pool?
4. How does the reviewer assess the clarity and effectiveness of the presented approach in addressing the exploration-exploitation tradeoff?
5. Are there any limitations or potential areas for improvement in the proposed framework, such as the issue with selecting the optimal pair of (T, T1)?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper addressed an important and practical problem for bandit algorithms: how to auto-tune the exploration parameter to automate the exploration-exploitation tradeoff and release the full power of bandit algorithms. The authors contribute knowledge on bandit hyper-parameter auto-tuning with a bandit-over-bandit framework, applicable to an important class of bandit algorithms including LinUCB and LinTS. The proposed framework shares the same flavor as EXP3 algorithms, in the sense that the superiority of a candidate exploration parameter is captured by a quantity serving as a weight, which then competes with the other candidates. The validity of the presented approach is backed by regret analysis and experiments.
Strengths And Weaknesses
Strengths: The writing is clear and easy to follow. The authors did a great job of presenting evidence for their claims, and the message is clearly conveyed.
Weaknesses: I am satisfied with the submitted version.
Questions
I have one question that comes from my personal interest, and I would greatly appreciate it if the authors shared their insights. My question concerns picking "one" class of bandit algorithms from a pool of candidate bandit algorithms. You may share the experience that, in your Algorithms 1 and 2, the time horizon T and warm-up length T1 are typically subject to change in practice. We may pick T and T1 subjectively, and I think we do not have a good strategy to find a pair of "great (T, T1)" to pick the "best bandit algorithm from the candidate pool". Your framework invites us to think about the possibility of a "bandit-over-bandit-over-bandit" flavor framework for automatic decision making. It would be great if you could share your thoughts on this point, if you also feel this is an interesting problem.
Limitations
The only interesting limitation that came to my mind is the "bandit algorithm candidate auto-selection" problem I elaborated on in the Questions section. This does not count as a limitation of the current submission.
NIPS
Title
Class-Incremental Learning via Dual Augmentation
Abstract
Deep learning systems typically suffer from catastrophic forgetting of past knowledge when acquiring new skills continually. In this paper, we emphasize two dilemmas, representation bias and classifier bias, in class-incremental learning, and present a simple and novel approach that employs explicit class augmentation (classAug) and implicit semantic augmentation (semanAug) to address the two biases, respectively. On the one hand, we propose to address the representation bias by learning transferable and diverse representations. Specifically, we investigate the feature representations in incremental learning based on spectral analysis and present a simple technique called classAug, which lets the model see more classes during training in order to learn representations transferable across classes. On the other hand, to overcome the classifier bias, semanAug implicitly involves simultaneously generating an infinite number of instances of old classes in the deep feature space, which poses tighter constraints to maintain the decision boundary of previously learned classes. Without storing any old samples, our method can perform comparably with representative data-replay based approaches.
1 Introduction
Deep neural networks (DNNs) have enabled great success in many machine learning tasks, based on stationary, large-scale, computationally expensive, and memory-intensive training data [1, 2, 3]. Yet the need for the ability to acquire sequential experience in dynamic and open environments [4, 5, 6] poses a serious challenge to modern deep learning systems, which only perform well on homogenized, balanced, and shuffled data [7]. Typically, DNNs suffer from drastic performance degradation on previously learned tasks after learning new knowledge, a well-documented phenomenon known as catastrophic forgetting [8, 9, 10]. Recently, incremental learning (IL), also referred to as lifelong learning or continual learning, has received extensive attention [11, 12, 13, 14] to enable DNNs to preserve and extend knowledge continually. Many earlier studies focus on task-incremental learning, which uses separate output layers for different tasks and needs the task identity for inference [11, 15, 16]. In this work, we consider the more realistic and challenging setting of class-incremental learning (Class-IL), where the model only has access to data of new classes at each stage and needs to learn a unified classifier that can classify all seen classes [13, 17, 18]. Unfortunately, the learning paradigm of Class-IL leads to two problems: representation bias and classifier bias, as shown in Figure 1. First, for representation learning, if the feature extractor is fixed after learning old classes, the learned representations are preserved but lack transferability to new classes; on the contrary, if we update the feature extractor on new classes, the updated representations would no longer be suitable for old classes. Consequently, the old and new classes would easily overlap in the deep feature space. We denote this dilemma as the representation bias. Second, to distinguish new classes from old classes, the training loss is typically calculated on all classes. Without old training data, the class weights of old classes would be ill-updated and mismatched with the updated representation space.
We denote this dilemma as the classifier bias. In this work, we investigate the learning of the representation and the classifier in incremental learning, and propose a simple and effective dual augmentation framework to overcome these two biases in Class-IL without storing and replaying training data of old classes.
Learning Representation for Incremental Learning. Existing works typically regularize network parameters explicitly [11, 15, 16] or implicitly [12] to reduce the representation shift when learning new classes. In this paper, instead of asking how to keep previously learned representations unchanged, we investigate the following question: What properties of learned representations could facilitate incremental learning? We hypothesize that learning transferable and diverse representations is an important requirement for incremental learning. Intuitively, with such representations, it could be easier to find a model that performs well on all tasks and improves both plasticity and stability, since different tasks would be closer in the parameter space. From a spectral analysis viewpoint, we investigate which components of feature representations are more transferable and less forgettable in the incremental learning process. It is found that spectral components with large eigenvalues are less forgettable. Furthermore, we exploit this finding to propose a simple technique named classAug, which can enlarge the spectral components to introduce more diverse and transferable representations for incremental learning.
Learning Classifier for Incremental Learning. Recently, several works were proposed to alleviate the classifier bias in data-replay based methods [18, 19, 20]. However, in the non-exemplar based (i.e., without storing and replaying old data) Class-IL setting, the classifier bias is more serious and the above methods cannot be directly used. A straightforward way is to store instances of old classes in the deep feature space. However, this strategy is undesirable due to limited memory resources and poor scalability. This work delves into classifier learning for Class-IL and proposes an implicit semantic augmentation (semanAug) approach to generate an infinite number of instances of old classes in the deep feature space by leveraging the distribution information. SemanAug is inspired by MCF [21] and ISDA [22], which have performed semantic augmentation for linear models and DNNs, respectively. However, both our way of leveraging semantic augmentation and our motivation fundamentally differ from theirs [21, 22].
Contributions. (i) We provide new insights into representation learning in incremental learning by analyzing the structural characteristics of the learned embedding space via spectral decomposition, and find that spectral components with large eigenvalues are less forgettable and carry more transferable features. Based on this observation, we propose a simple and effective method, classAug, to learn a better embedding space for incremental learning. (ii) For classifier learning in incremental learning, we propose semanAug, which implicitly involves simultaneously generating an infinite number of instances of old classes in the deep feature space to maintain the decision boundary of previously learned classes. (iii) Extensive experiments on benchmark datasets demonstrate the superior performance of our dual augmentation framework for the challenging scenario of Class-IL.
2 Related Work
Incremental Learning. Diverse approaches have been proposed for incremental learning of DNNs.
They can be roughly divided into three categories: regularization based, data replay based, and architecture based approaches. Regularization based methods focus on weight regularization by estimating and preventing the important network weights from changing [11, 15, 16]. The difference among those methods is the way they compute the importance of the parameters. However, it is hard to design a reasonable metric to measure the importance of parameters, and it is known that regularization strategies show poor performance in the Class-IL scenario [23, 24]. Data replay based methods address both the representation bias and the classifier bias straightforwardly, by storing a fraction of old data to jointly train the model with the current data. With stored real samples, some works [17, 13, 25] use a distillation loss to prevent forgetting, while others [26, 27, 28] develop gradient-based regularization to make more efficient use of the rehearsal data. To avoid storing real data, another line of works generates pseudo-samples of all previous classes for replay using deep generative models [29, 30, 31, 32]. Nevertheless, storing real data is undesirable in resource-limited or privacy- and safety-concerned scenarios. Moreover, training big generative models for complex datasets is inefficient. Architecture based methods dynamically extend the network structure during the course of incremental learning [33, 34, 35, 36]. However, growing the architecture is infeasible for large numbers of tasks, and those methods are often impractical for Class-IL.
Data Augmentation. The literature is rich on data augmentation for improving the generalization of DNNs. Classical strategies commonly synthesize "positive" new samples in a way that is consistent with the underlying data distribution of the original dataset [3]. Recent works show that label-mixing based methods such as Mixup [37] and CutMix [38] can greatly improve the generalization of DNNs. In complement to the input-space augmentations mentioned above, some works have explored feature-space augmentations, which augment the learned representations in the deep embedding space to enhance classifier performance. The intuition behind those works is that certain directions in the deep feature space correspond to meaningful semantic transformations [39, 40]. For instance, deep feature interpolation [40] leverages simple interpolations in the embedding space to achieve semantic augmentation. The recently proposed ISDA [22] performs semantic augmentation by estimating and leveraging the category-wise distribution of deep representations in an online manner. Despite its simplicity, ISDA has shown its effectiveness in semi-supervised learning [22], contrastive learning [41], domain adaptation [42] and long-tailed recognition [43].
3 Dual Augmentation Framework for Class-Incremental Learning
We first formalize the problem of Class-IL, then introduce the proposed classAug for representation learning and semanAug for classifier learning, respectively. Finally, we present the dual augmentation framework for Class-IL by combining the two augmentations.
Problem Definition. Typically, a Class-IL problem involves the sequential learning of $T$ tasks that consist of disjoint class sets, and the model has to classify all seen classes at any given point in training. At incremental step $t \in \{1, \dots, T\}$, $(x, y) \in D_t$ denotes a training sample, where $x$ is a sample in the input space $\mathcal{X}$ and $y \in \mathcal{C}_t$ is its corresponding label. $\mathcal{C}_t$ is the class set of task $t$.
To facilitate analysis, we represent the DNN-based model with two components: a feature extractor and a unified classifier. Specifically, the feature extractor $f_\theta : \mathcal{X} \to \mathcal{Z}$, parameterized by $\theta$, maps the input $x$ into a feature vector $z = f_\theta(x) \in \mathbb{R}^d$ in the deep feature space $\mathcal{Z}$; the unified classifier $g_\phi : \mathcal{Z} \to \mathbb{R}^{|\mathcal{C}_{1:t}|}$, parameterized by $\phi$, produces a probability distribution $g_\phi(z)$ as the prediction for $x$. Denote the overall parameters by $\Theta = (\theta, \phi)$. The general objective is to correctly classify test examples from all seen classes [44]. The key challenge of Class-IL is that data from previous tasks are assumed to be unavailable, which means the best configuration of the model for all seen tasks must be sought by minimizing a predefined loss function $\mathcal{L}$ (e.g., cross-entropy) on the current data $D_t$:
$\arg\min_{\theta, \phi} \ \mathbb{E}_{(x,y) \sim D_t}\big[\mathcal{L}(g_\phi(f_\theta(x)), y)\big]$. (1)
A widely used strategy to preserve old knowledge is knowledge distillation [45], which typically matches the current model's response to the current training data with the previous model's response, using the teacher-student framework [12, 13, 19].
3.1 Learning Representation with Class Augmentation
As we focus on non-exemplar based Class-IL, we intentionally avoid storing training samples of old classes. To maintain the generalizability of the learned representations for old classes, existing methods typically restrain the feature extractor from changing [11, 15, 16, 12]. However, this leads to a trade-off between plasticity and stability [5], and it would be hard to perform long-step incremental learning. Our high-level idea is to learn transferable and diverse representations to bridge the old and new classes in a better feature space. To delve into this problem, we want to answer two questions: (1) Which part of the feature representations tends to be forgotten in incremental learning? (2) How can we facilitate representation learning for incremental learning?
3.1.1 Analyzing Forgetting via Spectral Decomposition
In what follows, we explore which part of the feature representations tends to be forgotten and may not be transferable across different tasks in incremental learning. To this end, we propose to quantify the sensitivity of the model to different directions in the deep feature space by measuring the similarity of the space before and after learning new tasks. Formally, let $f_{\theta,\mathrm{old}}$ be a feature extractor trained on a dataset $D_{\mathrm{old}} = \{(x_i, y_i)\}_{i=1}^{n}$. A new dataset $D_{\mathrm{new}}$, containing classes disjoint from $D_{\mathrm{old}}$, is used to update $f_{\theta,\mathrm{old}}$, and the updated feature extractor is denoted $f_{\theta,\mathrm{new}}$. For the samples in $D_{\mathrm{old}}$, we thus obtain two groups of deep features, mapped by $f_{\theta,\mathrm{old}}$ and $f_{\theta,\mathrm{new}}$, respectively. Using eigenvalue decomposition, we can decompose both the features mapped by the original feature extractor (i.e., $f_{\theta,\mathrm{old}}(x_i)$) and the features mapped by the updated feature extractor (i.e., $f_{\theta,\mathrm{new}}(x_i)$) into different directions as follows:
$\frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i) f_\theta(x_i)^T = \sum_{j=1}^{d} u_j \lambda_j u_j^T$, (2)
where $\lambda_j$ is the eigenvalue with index $j$, $u_j$ is its eigenvector, and $d$ is the dimensionality of the feature space. Through the spectral factorization in Eq. (2), we can represent the original and new representations with two groups of eigenvectors: $\{u_{\mathrm{old},1}, \dots, u_{\mathrm{old},d}\}$ and $\{u_{\mathrm{new},1}, \dots, u_{\mathrm{new},d}\}$. Next, we investigate the forgetting or transferability of each direction. Shonkwiler [46] introduced the principal angles [47] to measure the similarity of two subspaces.
However, it is unreasonable to treat all eigenvectors equally when calculating the principal angles, regardless of their relative eigenvalues. Inspired by [48], we use corresponding angles, denoted by $\psi$, to explore the distance between two subspaces in incremental learning.
Definition 1 (Corresponding Angle). Given two groups of eigenvectors $\{u_{\mathrm{old},1}, \dots, u_{\mathrm{old},d}\}$ and $\{u_{\mathrm{new},1}, \dots, u_{\mathrm{new},d}\}$, the corresponding angle is the angle between two eigenvectors with the same eigenvalue index. The cosine of the corresponding angle is
$\cos(\psi_j) = \frac{\langle u_{\mathrm{old},j},\, u_{\mathrm{new},j} \rangle}{\|u_{\mathrm{old},j}\| \cdot \|u_{\mathrm{new},j}\|}$, (3)
where $u_{\mathrm{old},j}$ is the eigenvector with the $j$-th largest eigenvalue in the old feature space, and similarly for $u_{\mathrm{new},j}$. Note that $\|u_{\mathrm{old},j}\| = 1$ and $\|u_{\mathrm{new},j}\| = 1$.
For IL, "preserving old knowledge" means maintaining the previously learned decision boundaries among classes. At the representation level, for an old class, the shape (i.e., covariance) of the distribution should not change too much. If an eigenvector direction changes only slightly after updating the feature extractor, the corresponding angle is small, and vice versa. Intuitively, the corresponding angle captures the representation shift between the old and updated feature extractors during incremental learning, and reflects the forgetting along certain directions in the deep feature space. Based on the metric defined above, we explore the forgetting of different directions in Class-IL. We use LwF-MC [12, 13] as the baseline method and train a ResNet-18 [1] on CIFAR-100 [49] using SGD in a 2-step manner. Concretely, the model is first trained on the first 50 classes and then updated on the other 50 classes. Figure 2 (a) shows the absolute cosine values of the corresponding angles between the old and new eigenvectors. We observe that eigenvectors with larger eigenvalues produce larger similarity (smaller corresponding angles), which indicates that those directions are more transferable and less forgettable across different tasks. On the contrary, the eigenvectors with small eigenvalues tend to move after updating the model on new tasks, and can be regarded as forgettable directions.
Transferable and Diverse Representations. As demonstrated above, the directions with larger eigenvalues transfer better and suffer less forgetting. This thought-provoking observation indicates that our learned representations should have the following properties: (1) Transferability: the eigenvalues of the several significant directions should be enlarged to transfer across tasks (or classes). (2) Diversity: the number of directions with significant eigenvalues should be increased. Note that these properties differ from those in the common single-task learning scenario. Actually, reducing the number of directions with significant variance has been seen as a form of feature compression [51], which is linked to generalization by information theory [52, 53]. However, the usual concepts of generalization may not be entirely appropriate for IL, since standard learning only aims to learn compact representations within the training classes without considering generalizability to new classes. In IL, directions that are less discriminative for the current task could capture useful representations for future tasks. A recent paper [54] has shown that strongly compressed representations can actually hurt generalization in the deep metric learning setting.
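As a concrete illustration of this analysis, the sketch below computes the absolute cosines of the corresponding angles (Eqs. 2 and 3) from two matrices of deep features extracted before and after an incremental update. It is our own illustrative code, and the function name and interface are assumptions rather than the authors' implementation.

```python
import numpy as np

def corresponding_angles(Z_old, Z_new):
    """Absolute cosines of corresponding angles between two feature spaces (Eq. 3).

    Z_old, Z_new: (n, d) deep features of the same samples under the old and
    updated feature extractors, respectively.
    """
    bases = []
    for Z in (Z_old, Z_new):
        # (1/n) sum_i z_i z_i^T as in Eq. (2); eigh gives an orthonormal eigenbasis
        lam, U = np.linalg.eigh(Z.T @ Z / len(Z))
        order = np.argsort(lam)[::-1]       # sort eigenvectors by descending eigenvalue
        bases.append(U[:, order])
    U_old, U_new = bases
    # |cos(psi_j)| per eigenvalue index; abs() removes the eigenvector sign ambiguity
    return np.abs(np.sum(U_old * U_new, axis=0))
```

Plotting the returned values against the eigenvalue index would reproduce the qualitative pattern of Figure 2 (a): indices with large eigenvalues yield cosines near 1, while the tail directions drift.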
Therefore, to reduce forgetting and enhance the transferability of the representations, it is important to enlarge the eigenvalues and increase the number of eigenvectors with significant variance.
3.1.2 Learning Representations via Class Augmentation
We now exploit the above analysis to propose a simple method for representation learning in Class-IL. Our key idea is to learn transferable and diverse representations by learning more classes at each incremental stage $t$. A direct way to do so is to introduce real classes from other datasets as auxiliaries. However, it is unrealistic to always have access to other real classes, and which datasets should be used remains unclear. Therefore, we propose class augmentation (classAug) to augment the original classes by synthesizing auxiliary classes based on $D_t$. Concretely, inspired by Mixup [37], classAug randomly interpolates two samples $x_a$ and $x_b$ from two different classes $a$ and $b$ to generate a new sample $x^{\mathrm{new}}_{ab}$ representing a new class:
$x^{\mathrm{new}}_{ab} = \lambda x_a + (1 - \lambda) x_b$, (4)
where $\lambda$ is a random interpolation coefficient. For a $k$-class problem, we can generate $k(k-1)/2$ new classes using the above method, which can be further merged into $m$ auxiliary classes. As a result, the original $k$-class problem in the current task is extended to a $(k+m)$-class problem. Moreover, we restrict $\lambda$ to be sampled from the interval $[0.4, 0.6]$ to reduce the overlap between the augmented and original classes. At the end of each IL stage, the augmented class nodes in the classifier are removed.
Discussion. The proposed classAug is related to Mixup [37], which applies random interpolation to a pair of training samples and their respective one-hot labels. However, the interpolated samples in Mixup lie near the original data and the number of classes is unchanged, whereas in our method it is increased. By learning to classify more classes at each stage $t$, the model can learn more transferable and diverse representations. Figure 2 (b) displays and compares the eigenvalues of the representations learned with different methods on the first 50 classes of CIFAR-100 (to visualize the distribution clearly, the largest eigenvalue is not included in the figure). It is obvious that the proposed classAug enhances the eigenvalues significantly and produces more directions with significant variance compared with other methods. On the contrary, Mixup and Label Smoothing (LS) [50] lead to significantly smaller eigenvalues for the several top eigenvectors, i.e., more compact representations. Indeed, the compression effect of soft-label based methods has also been demonstrated in [51, 50]. As shown in Section 4.3, classAug improves the performance of Class-IL significantly, while Mixup and LS have a negative effect in our experiments.
3.2 Learning Classifier with Semantic Augmentation
As demonstrated in Section 1, classifier bias is another problem in Class-IL. When learning new classes, the previously learned decision boundary suffers from catastrophic distortion, and thus test samples from old classes can easily be mapped to wrong classes. To overcome this issue, we propose semantic augmentation (semanAug), which leverages the distribution information (i.e., class mean and covariance) of old classes to regularize the learning of the classifier. Formally, for each old class $k \in \{1, \dots, C_{\mathrm{old}}\}$, we can generate $M$ instances in the deep feature space from its distribution, i.e., $\tilde{z}_k \sim \mathcal{N}(\mu_k, \gamma \Sigma_k)$, where $\gamma$ is a non-negative coefficient.
3.2 Learning Classifier with Semantic Augmentation

As demonstrated in Section 1, classifier bias is another problem in Class-IL. When learning new classes, the previously learned decision boundaries suffer catastrophic distortion, so test samples from old classes can easily be mapped to wrong classes. To overcome this issue, we propose semantic augmentation (semanAug), which leverages the distribution information (i.e., class mean and covariance) of the old classes to regularize the learning of the classifier. Formally, for each old class k ∈ {1, ..., C_old}, we can generate M instances in the deep feature space from its distribution, i.e., $\tilde{\mathbf{z}}_k \sim \mathcal{N}(\boldsymbol{\mu}_k, \gamma \boldsymbol{\Sigma}_k)$, in which γ is a non-negative coefficient. Then the generated instances of old classes and the real instances of new classes in the deep feature space can be jointly fed to the classifier to minimize the cross-entropy loss:

$$\mathcal{L}_t = \underbrace{\frac{1}{n_t} \sum_{i=1}^{n_t} -\log\Bigg(\frac{e^{\boldsymbol{\varphi}_{y_i}^{\top} \mathbf{z}_i + b_{y_i}}}{\sum_{c=1}^{C_{all}} e^{\boldsymbol{\varphi}_c^{\top} \mathbf{z}_i + b_c}}\Bigg)}_{\mathcal{L}_{t,new}:\ \text{loss on real features of new classes}} + \underbrace{\frac{1}{C_{old}} \sum_{k=1}^{C_{old}} \frac{1}{M} \sum_{m=1}^{M} -\log\Bigg(\frac{e^{\boldsymbol{\varphi}_k^{\top} \tilde{\mathbf{z}}_{k,m} + b_k}}{\sum_{c=1}^{C_{all}} e^{\boldsymbol{\varphi}_c^{\top} \tilde{\mathbf{z}}_{k,m} + b_c}}\Bigg)}_{\mathcal{L}_{t,old}:\ \text{loss on generated features of old classes}}, \qquad (5)$$

where n_t is the number of training samples in the current task dataset D_t, C_old is the total number of old classes up to stage t, and C_all = C_old + C_t is the number of all seen classes at stage t. $\boldsymbol{\varphi} = [\boldsymbol{\varphi}_1, ..., \boldsymbol{\varphi}_{C_{all}}]^{\top} \in \mathbb{R}^{C_{all} \times d}$ and $\mathbf{b} = [b_1, ..., b_{C_{all}}]^{\top} \in \mathbb{R}^{C_{all}}$ are the weight matrix and bias vector of the last fully connected layer, respectively. In Class-IL, the second term of Eq. (5), $\mathcal{L}_{t,old}$, is computationally inefficient when M and C_old are large. In the following, we present an easy-to-compute way to implicitly generate infinitely many instances of old classes in the deep feature space.

Upper bound of $\mathcal{L}_{t,old}$. Concretely, in the case $M \to \infty$, the second term of Eq. (5) becomes

$$\begin{aligned}
\mathcal{L}_{t,old} &= \frac{1}{C_{old}} \sum_{k=1}^{C_{old}} \mathbb{E}_{\tilde{\mathbf{z}}_k}\Bigg[-\log\Bigg(\frac{e^{\boldsymbol{\varphi}_k^{\top} \tilde{\mathbf{z}}_k + b_k}}{\sum_{c=1}^{C_{all}} e^{\boldsymbol{\varphi}_c^{\top} \tilde{\mathbf{z}}_k + b_c}}\Bigg)\Bigg] = \frac{1}{C_{old}} \sum_{k=1}^{C_{old}} \mathbb{E}_{\tilde{\mathbf{z}}_k}\Bigg[\log\Bigg(\sum_{c=1}^{C_{all}} e^{(\boldsymbol{\varphi}_c - \boldsymbol{\varphi}_k)^{\top} \tilde{\mathbf{z}}_k + (b_c - b_k)}\Bigg)\Bigg] \\
&\leqslant \frac{1}{C_{old}} \sum_{k=1}^{C_{old}} \log\Bigg(\mathbb{E}_{\tilde{\mathbf{z}}_k}\Bigg[\sum_{c=1}^{C_{all}} e^{(\boldsymbol{\varphi}_c - \boldsymbol{\varphi}_k)^{\top} \tilde{\mathbf{z}}_k + (b_c - b_k)}\Bigg]\Bigg) = \frac{1}{C_{old}} \sum_{k=1}^{C_{old}} \log\Bigg(\sum_{c=1}^{C_{all}} e^{\mathbf{v}_{c,k}^{\top} \boldsymbol{\mu}_k + (b_c - b_k) + \frac{\gamma}{2} \mathbf{v}_{c,k}^{\top} \boldsymbol{\Sigma}_k \mathbf{v}_{c,k}}\Bigg). \qquad (6)
\end{aligned}$$

In the above, $\mathbf{v}_{c,k} = \boldsymbol{\varphi}_c - \boldsymbol{\varphi}_k$. The inequality follows from Jensen's inequality, $\mathbb{E}[\log X] \leqslant \log \mathbb{E}[X]$, and the last equality uses the moment-generating function $\mathbb{E}[e^{tX}] = e^{t\mu + \frac{1}{2}\sigma^2 t^2}$ for $X \sim \mathcal{N}(\mu, \sigma^2)$, since $(\boldsymbol{\varphi}_c - \boldsymbol{\varphi}_k)^{\top} \tilde{\mathbf{z}}_k + (b_c - b_k)$ is a Gaussian random variable. As can be seen, Eq. (6) is an upper bound of the original $\mathcal{L}_{t,old}$, which provides an elegant and much more efficient way to implicitly generate infinitely many instances of old classes in the deep feature space. The bound in Eq. (6) can be written in the common cross-entropy form:

$$\mathcal{L}_{t,semanAug} \triangleq \mathcal{L}_{t,old} = \frac{1}{C_{old}} \sum_{k=1}^{C_{old}} -\log\Bigg(\frac{e^{\boldsymbol{\varphi}_k^{\top} \boldsymbol{\mu}_k + b_k}}{\sum_{c=1}^{C_{all}} e^{\boldsymbol{\varphi}_c^{\top} \boldsymbol{\mu}_k + b_c + \frac{\gamma}{2} \mathbf{v}_{c,k}^{\top} \boldsymbol{\Sigma}_k \mathbf{v}_{c,k}}}\Bigg). \qquad (7)$$

Intuitively, $\mathcal{L}_{t,old}$ implicitly performs semantic transformations of $\boldsymbol{\mu}_k$ based on $\boldsymbol{\Sigma}_k$. To maintain the decision boundary, γ should be smaller if the distribution of a class is near the decision boundary, and larger if the distance is relatively far. We set γ = 2 in our experiments. In addition, we can observe that when γ = 0, only the class means are used for knowledge retention.

Discussion. (1) Although the derivation of the upper bound in Eq. (6) is similar to ISDA [22], both our motivation and the way we leverage semanAug differ from ISDA. When learning new classes, we apply semanAug only to the class mean of each old class based on the memorized distribution information, whereas ISDA applies semantic augmentation to all training samples to improve generalization in standard supervised learning. In addition, a crucial step in ISDA is to estimate the mean and covariance matrix of each class in an online manner. In contrast, semanAug is naturally suitable for Class-IL, since the distribution of old classes can be estimated with all training samples at the end of each learning stage. (2) Using previous class statistics for IL has also been explored in IL2M [55]. However, our method differs from IL2M in both the statistics used and the way they are leveraged. First, the class statistics in IL2M are prediction scores of the classifier, while ours are class distribution statistics in the deep feature space. Second, IL2M uses the class statistics to calibrate the predictions of a continual learner in a post-processing manner, while our method leverages the statistics to automatically learn a balanced classifier.
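The closed form in Eq. (7) is just a cross-entropy with a per-class logit correction, so it is cheap to implement. Below is a minimal sketch under our own naming, not the authors' released code; `mu` and `sigma` are the stored per-class statistics (the set S_t in Algorithm 1), and the classifier is assumed to be a single linear layer:

```python
import torch
import torch.nn.functional as F

def seman_aug_loss(phi, b, mu, sigma, gamma=2.0):
    """Implicit semantic augmentation loss of Eq. (7).

    phi: (C_all, d) classifier weights; b: (C_all,) biases.
    mu: (C_old, d) stored old-class means; sigma: (C_old, d, d) covariances.
    """
    c_old = mu.size(0)
    logits = mu @ phi.t() + b                         # (C_old, C_all): phi_c^T mu_k + b_c
    # v_{c,k} = phi_c - phi_k for every (old class k, class c) pair
    v = phi.unsqueeze(0) - phi[:c_old].unsqueeze(1)   # (C_old, C_all, d)
    # quadratic correction (gamma/2) * v^T Sigma_k v; it is zero when c = k,
    # so the numerator of Eq. (7) is untouched and plain cross-entropy applies
    quad = 0.5 * gamma * torch.einsum('kcd,kde,kce->kc', v, sigma, v)
    targets = torch.arange(c_old, device=mu.device)
    return F.cross_entropy(logits + quad, targets)    # mean over the C_old classes
```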
3.3 The Dual Augmentation Learning Framework

With classAug for the representation bias and semanAug for the classifier bias, Figure 4 describes the learning process of the dual augmentation framework (IL2A). We also use the well-known knowledge distillation (KD) [19], for two reasons. First, classAug and KD are complementary and focus on different aspects of representation learning. Second, KD reduces the change of the feature extractor, which is crucial for semanAug because semanAug implicitly generates instances in the deep feature space from the old distributions. The total learning objective at each stage t is as follows:

$$\mathcal{L}_t = \mathcal{L}_{t,new} + \alpha \mathcal{L}_{t,semanAug} + \beta \mathcal{L}_{t,kd}, \qquad (8)$$

where α and β are two hyper-parameters, $\mathcal{L}_{t,new}$ and $\mathcal{L}_{t,semanAug}$ are given in Eq. (5) and Eq. (7), respectively, and $\mathcal{L}_{t,kd} = \frac{1}{n_t} \sum_{i=1}^{n_t} \|f_{\theta_{t-1}}(\mathbf{x}_i) - f_{\theta_t}(\mathbf{x}_i)\|$. Note that $\mathcal{L}_{t,new}$ and $\mathcal{L}_{t,kd}$ are applied to both the original and the synthesized samples. Algorithm 1 presents the pseudo-code of IL2A.

Algorithm 1: IL2A dual augmentation algorithm.
  Randomly initialize Θ₀ = {θ₀, φ₀}; S₀ = ∅.
  For each incremental stage t ∈ {1, ..., T}:
    Input: model Θ_{t−1}, data D_t = {(x_i, y_i)}_{i=1}^{n_t}; Output: model Θ_t.
    1. Θ_t ← Θ_{t−1}.
    2. Build D_{t,aug} = {(x′_i, y′_i)}_{i=1}^{n′_t} via classAug, and add class nodes for the augmented classes.
    3. If t = 1, train Θ_t by minimizing L(g_φ(f_θ(x′)), y′); otherwise, train Θ_t by minimizing Eq. (8).
    4. Compute s = {µ, Σ} for each class in D_t and set S_t ← S_{t−1} ∪ s.
    5. Remove the augmented class nodes from the classifier.

4 Experiments

4.1 Evaluation Protocol

Datasets. We perform our experiments on CIFAR-100 [49] and Tiny-ImageNet [56]. A common setting is to train the model on half of the classes in the first task, and on an equal number of classes in each remaining incremental step. Accordingly, we split the CIFAR-100 dataset into different settings: 50 + 5×10, 50 + 10×5, and 40 + 20×3. For instance, 50 + 10×5 means that the first task contains 50 classes and each of the following 10 tasks contains 5 classes. Similarly, the settings for Tiny-ImageNet are 100 + 5×20, 100 + 10×10, and 100 + 20×5. Intuitively, more classes per task gives the model a harder problem at each step, while a longer task sequence challenges the model's retention.

Implementation Details. Following [44], we use ResNet-18 [1] as our base architecture and train it from scratch in each experiment. All models are trained with the Adam optimizer [57], with an initial learning rate of 0.001, for 100 epochs with a mini-batch size of 64. The learning rate is reduced by a factor of 10 at epochs 45 and 90. We use the same hyper-parameter values for all experiments; specifically, we set α = 10 and β = 10 in Eq. (8). The number of augmented classes (i.e., m) depends on the number of (original) classes at the current incremental step. Taking CIFAR-100 as an example, m is 45 for the 5-phase setting, where each incremental step has 10 classes, and m is 10 for the 10-phase setting, where each incremental step has 5 classes. At the end of each incremental stage, we evaluate the model on all seen classes after removing the class nodes of the m augmented classes from the classifier. Our code is available at https://github.com/Impression2805/IL2A.
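To make the training recipe concrete, a per-stage update combining the three losses of Eq. (8) might look as follows. This is a schematic sketch, not taken from the released repository: `feature_extractor`, `classifier`, and the frozen `old_model` are placeholder names, `classifier` is assumed to be a linear layer, and `class_aug` / `seman_aug_loss` refer to the sketches above.

```python
import torch
import torch.nn.functional as F

def train_step(x, y, feature_extractor, classifier, old_model,
               mu, sigma, num_orig_classes, num_aug_classes,
               alpha=10.0, beta=10.0):
    # classAug: extend the batch with synthetic-class samples (Eq. 4)
    x_new, y_new = class_aug(x, y, num_orig_classes, num_aug_classes)
    x_all = torch.cat([x, x_new])
    y_all = torch.cat([y, y_new])

    z = feature_extractor(x_all)
    loss_new = F.cross_entropy(classifier(z), y_all)          # L_{t,new}

    # semanAug on the stored old-class statistics (Eq. 7)
    loss_old = seman_aug_loss(classifier.weight, classifier.bias, mu, sigma)

    # feature distillation against the previous-stage model (L_{t,kd}),
    # applied to original and synthesized samples alike
    with torch.no_grad():
        z_prev = old_model(x_all)
    loss_kd = (z - z_prev).norm(dim=1).mean()

    return loss_new + alpha * loss_old + beta * loss_kd       # Eq. (8)
```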
Comparison Methods. Our method (IL2A) does not store any old samples for replay when learning new classes. Therefore, we first compare IL2A with several non-exemplar based approaches: MAS [16], LwF-MC [13], MUC [58], and LwM [59]. In addition, we also compare with several exemplar based methods such as iCaRL [13], EEIL [18], and LUCIR [19]. Specifically, for the data replay based methods, we follow [13, 19] and store 20 samples per class using the 'herd' selection technique [13]. We report the average top-1 accuracy over all previously seen classes up to each incremental step t. For iCaRL, we report the results of both CNN predictions and nearest-mean-of-exemplars classification, denoted iCaRL-CNN and iCaRL-NME, respectively.

4.2 Experimental Results

Main Results. Comparative results are shown in Figure 5. First, we observe that our method performs much better than non-exemplar based methods such as LwF-MC and MUC in the trend of the accuracy curves under different settings. In particular, the gap appears unbridgeable in the long-step Class-IL settings, e.g., 10 phases and 20 phases. This suggests that only constraining old parameters does not suffice to prevent forgetting; we argue that this is partly due to the unaddressed classifier bias. When compared with representative data replay based methods such as iCaRL, EEIL, and LUCIR, our method shows remarkably strong performance without storing old samples. The success of our method can be attributed to the proposed classAug and semanAug. Specifically, classAug is applied to the new classes of the current task; it enables the model to learn more transferable and diverse representations for future classes and, in turn, reduces the forgetting of old parameters when learning new classes. SemanAug is applied to the old classes of previous tasks; it leverages the valuable distribution information of old classes to learn a unified classifier that connects the classes from different tasks to each other.

Ablation Study. To evaluate the effect of each component of IL2A, we perform an ablation study and show the results of the 10-phase setting (CIFAR-100) in Table 1. Specifically, the baseline denotes the method that does not generate pseudo-instances using semanAug, but only replays the class mean of each old class when training on new classes. In this way, we aim to validate the effectiveness of semanAug compared with replaying class means only. In summary, we observe that: (1) The baseline improves the performance of KD significantly. (2) SemanAug improves the performance of the baseline from 34.71% to 42.09%. These results indicate the effect of the distribution information for maintaining old knowledge in Class-IL. (3) ClassAug also has a remarkable effect on the baseline, and (4) the performance can be further improved by combining it with semanAug, which indicates that the two modules are complementary. Similar results are observed in the other settings of CIFAR-100 and Tiny-ImageNet. (5) As for computational complexity, classAug involves input-level sample mixing and the augmented samples are fed through the feature extractor, whereas semanAug performs implicit old-instance generation directly in the deep feature space. Therefore, semanAug is cheaper than classAug from the computation perspective.

4.3 Further Analysis

ClassAug Improves both Plasticity and Stability in Class-IL. To analyze the effectiveness of classAug more concretely, we explore how it affects the new-task accuracy (↑) and the average forgetting (↓) (CIFAR-100, 10-phase setting). Average forgetting [60] estimates the forgetting of previous tasks. The forgetting measure $f_k^i$ of the i-th task after training the k-th task is defined as

$$f_k^i = \max_{t \in \{1, \dots, k-1\}} (a_{t,i} - a_{k,i}), \quad \forall i < k,$$

where $a_{m,n}$ is the accuracy on task n after training task m. The average forgetting measure is then $F_k = \frac{1}{k-1} \sum_{i=1}^{k-1} f_k^i$. Intuitively, the new-task accuracy can be viewed as the plasticity of the incremental learner, and the average forgetting as its stability.
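Both measures reduce to simple bookkeeping over the accuracy matrix $a_{m,n}$. A minimal sketch (our own code; `acc` is an assumed T×T array filled in as training proceeds):

```python
import numpy as np

def average_forgetting(acc, k):
    """Average forgetting F_k after training task k (tasks are 1-indexed).

    acc: (T, T) array with acc[m-1, n-1] = accuracy on task n after training task m.
    """
    # f^i_k = max_{t < k} a_{t,i} - a_{k,i}, for each earlier task i
    f = [acc[:k - 1, i].max() - acc[k - 1, i] for i in range(k - 1)]
    return float(np.mean(f))  # F_k = mean over the k-1 earlier tasks
```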
Figure 6 (a) and (b) report the results, from which we see that classAug simultaneously improves the new-task accuracy and reduces the average forgetting. In particular, the significant improvement in new-task accuracy implies that a model trained with classAug is a good initialization for the following tasks. Consequently, classAug effectively improves the trade-off between the plasticity and stability of a continual learner.

Comparing ClassAug with Other Regularizers. We compare the proposed classAug with Mixup and LS in Figure 6 (c), where the baseline (with semanAug) is our IL2A without classAug. As can be seen, Mixup and LS have a negative effect on the final accuracy. This phenomenon can be interpreted based on the analysis in Section 3.1.1 and Figure 2 (b): those regularizers result in more compressed representations, damaging the transferability of the representations. Besides, the label smoothing strategy also affects the weights of the old classes in the classifier, thus increasing the classifier bias. Similar results have been reported in [61].

Discussion of the Covariance Matrix. In our main experiments, we use the full covariance matrix for semanAug. However, storing the full covariance matrix can be inefficient when the matrix dimension is large. An alternative is to store only the diagonal elements, which greatly reduces the memory cost. Figure 7 also reports the results of using the diagonal covariance matrix. Under different settings, using the full covariance matrix is slightly better than the diagonal form. This is reasonable, because the full covariance matrix stores more distribution information about the old classes; the diagonal covariance matrix, however, is more memory-efficient in practice.

ClassAug Improves Confidence Reliability. During the continuous use of a machine learning system in open-world applications, there are three key steps [62]. The first is out-of-distribution (OOD) detection [63], which requires the system to detect unknown samples from novel classes. The second is to label the collected unknown samples, either by humans or by automatic algorithms [64]. Finally, the system must scale and adapt incrementally to learn the novel classes, which is the Class-IL problem studied in this paper. Recent studies have found that DNNs are overconfident in their predictions [63, 65] and lack the ability to detect samples from unknown classes. In real-world applications, we expect a continual learner to have good OOD detection ability, so we explore the OOD detection ability of the proposed classAug. Concretely, we train a ResNet-18 on CIFAR-10, and the test samples from CIFAR-10 are in-distribution. For OOD examples, we test on MNIST [66], Fashion-MNIST [67], LSUN (resized) [68], and Tiny-ImageNet (resized). As shown in Table 2, classAug noticeably improves the OOD detection performance of the baseline [63] on commonly used metrics such as AUROC, AUPR-In, and AUPR-Out [63]. By learning to recognize synthetic samples, DNNs acquire more robust and transferable representations that generalize to OOD samples. Moreover, as shown in Table 2, Mixup sometimes damages OOD detection performance, which further demonstrates the superiority of classAug.
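The baseline [63] scores each input by its maximum softmax probability, and AUROC treats in-distribution samples as positives. A minimal sketch of this evaluation (our own code; `model`, `id_loader`, and `ood_loader` are assumed inputs):

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def ood_auroc(model, id_loader, ood_loader, device="cuda"):
    """AUROC of maximum-softmax-probability OOD detection (the baseline of [63])."""
    def msp_scores(loader):
        scores = []
        for x, _ in loader:
            probs = torch.softmax(model(x.to(device)), dim=1)
            scores.append(probs.max(dim=1).values.cpu().numpy())
        return np.concatenate(scores)

    s_id, s_ood = msp_scores(id_loader), msp_scores(ood_loader)
    labels = np.concatenate([np.ones_like(s_id), np.zeros_like(s_ood)])
    return roc_auc_score(labels, np.concatenate([s_id, s_ood]))
```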
5 Conclusion

In this paper, we propose a simple and effective dual augmentation framework to address the representation bias and classifier bias in Class-IL. We first investigate the transferability (or forgetting) of representations via spectral decomposition, which motivates classAug, a method that learns transferable, diverse, and less compact representations for IL. Furthermore, we propose semanAug, which implicitly generates infinitely many instances of old classes in the deep feature space during the joint learning of the unified classifier. Experiments show that our method achieves remarkable performance compared with state-of-the-art Class-IL methods. Future work will consider the dual augmentation framework in more challenging scenarios such as Class-IL with distribution shift and OOD data, few-shot Class-IL, and federated incremental learning.

Acknowledgements

This work has been supported by the National Key Research and Development Program under Grant No. 2018AAA0100400, the National Natural Science Foundation of China (NSFC) grants U20A20223, 61633021, 62076236, and 61721004, the Key Research Program of Frontier Sciences of CAS under Grant ZDBS-LY-7004, and the Youth Innovation Promotion Association of CAS under Grant 2019141.
1. What is the main contribution of the paper regarding incremental image classification?
2. What are the strengths of the proposed ClassAug and SemanticAug techniques?
3. What are the weaknesses of the paper, particularly in terms of writing clarity and technical exposition?
4. Do you have any questions regarding the similarity between SemanticAug and ISDA, or the lack of introduction to certain concepts in the paper?
5. Are there any concerns about the experimental setup or results presented in the paper?
Summary Of The Paper Review
Summary Of The Paper
This work tackles the incremental image classification problem and proposes two techniques, ClassAug and SemanticAug, to improve the overall accuracy over all classes, both old and new. Results on two benchmarks, CIFAR-100 and Tiny-ImageNet, are presented to demonstrate the superior performance.

Review
Strengths
- The method does not require storing exemplars of old classes, and its performance is better than non-exemplar based methods and close to exemplar-based methods on two benchmarks.
- ClassAug seems to be an original idea, with supporting evidence from the spectral analysis of the primary eigenvector components in the per-class features.

Weaknesses
- One issue I find with the paper writing is that the paper is not self-contained. In Table 1, a minimal introduction to iCaRL and CCIL is missing, which makes it difficult to get the idea in Section 3.1 before reviewing prior papers. Similarly, at L169, a minimal introduction to LwF-MC is not included.
- The proposed SemanticAug seems to be an incremental contribution, as it resembles ISDA [24] a lot. While the discussions at L255 are valid, the adaptation of the ISDA idea to incremental classification seems to be straightforward.
- Some technical exposition is incomplete.
  - Fig 2 (b): when data augmentation such as Mixup or CutMix is used, it often requires more training epochs to learn a better model. Can you clarify whether the results with data augmentation in Fig 2 (b) are obtained with more training epochs?
  - L217: how many augmented classes (i.e., m) are added? Is any data sampling technique used to balance the original new classes and the augmented new classes? How many images are generated for each augmented class?
  - L239: for each old class, do you fix the M deep features once they are generated from a normal distribution with fixed mean/covariance? Since the backbone is also updated as more new tasks are added, the mean/covariance will become outdated.
  - L292: what is the "herd" selection technique?
NIPS
1. What is the main contribution of the paper regarding class-incremental learning?
2. What are the strengths of the proposed approach, particularly in addressing representation bias and regularization impacts?
3. What are the weaknesses of the paper, such as complexity analysis and limited experimentation on diverse datasets?
4. Do you have any questions or suggestions regarding the novel dual augmentation framework?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a class-incremental learning (CIL) method that addresses the representation bias and classifier bias observed in CIL. The goal is to learn transferable and compact representations for incremental learning and to leverage the distribution of old classes to avoid forgetting and to jointly learn a unified classifier. The paper also presents an interesting finding that regularization in CIL can have a negative effect.

Review
This paper explores important questions in CIL: i) which parts of the representations tend to be forgotten, ii) how regularization impacts representation learning, and iii) how to facilitate representation learning in CIL. To analyze the forgetting, the authors use spectral decompositions, obtain the eigenvalues and eigenvectors, and check the cosine similarity between eigenvectors. To learn transferable representations, they propose class augmentation; to learn the classifier, semantic augmentation is used. Class augmentation and semantic augmentation together are called the dual augmentation framework, which is the proposal of this paper. To the best of my understanding, this approach is novel and has not been used before. Besides, this approach does not require exemplars and replay.

It would be useful to clarify a few points. In Section 3.1.1 it is mentioned that, intuitively, the corresponding angle could capture the representation shift between the old and updated feature extractors during incremental learning and reflect the forgetting along different directions in feature space. It is not clear why this is intuitively correct.

The complexity of the dual augmentation framework is not analyzed in the paper. The experiments show the results of no augmentation, only semantic augmentation, only class augmentation, and their combination, and it seems the combination of both augmentations outperforms the others by a margin, but it is not clear how costly the combination is to run, which augmentation is cheaper, etc.

The experiments are performed on CIFAR-100 and Tiny-ImageNet. These datasets are called challenging, but it is not explained in what sense they are challenging. To the best of my knowledge, they are standard benchmarking datasets for incremental learning, and it would be useful to run additional experiments on genuinely challenging datasets with data samples from different domains.

The structure of the paper is nice, and the problem, motivation, and goal are clear. But there are so many typos and grammatical mistakes that the paper is difficult to follow. Some examples: p5, l161 two groups "of" eigenvectors; l170 firstly trained -> first trained; l185 simimlar -> similar; p7, l263 discribes -> describes; l266 foucs -> focus, diffrnt -> different; ... etc.
NIPS
Title Class-Incremental Learning via Dual Augmentation Abstract Deep learning systems typically suffer from catastrophic forgetting of past knowledge when acquiring new skills continually. In this paper, we emphasize two dilemmas, representation bias and classifier bias in class-incremental learning, and present a simple and novel approach that employs explicit class augmentation (classAug) and implicit semantic augmentation (semanAug) to address the two biases, respectively. On the one hand, we propose to address the representation bias by learning transferable and diverse representations. Specifically, we investigate the feature representations in incremental learning based on spectral analysis and present a simple technique called classAug that lets the model see more classes during training to learn representations transferable across classes. On the other hand, to overcome the classifier bias, semanAug implicitly and simultaneously generates an infinite number of instances of old classes in the deep feature space, which poses tighter constraints to maintain the decision boundary of previously learned classes. Without storing any old samples, our method performs comparably to representative data replay based approaches. 1 Introduction Deep neural networks (DNNs) have enabled great success in many machine learning tasks, based on stationary, large-scale training data and computationally and memory-intensive training [1, 2, 3]. Yet the need to acquire sequential experience in dynamic and open environments [4, 5, 6] poses a serious challenge to modern deep learning systems, which only perform well on homogenized, balanced, and shuffled data [7]. Typically, DNNs suffer from drastic performance degradation on previously learned tasks after learning new knowledge, a well-documented phenomenon known as catastrophic forgetting [8, 9, 10]. Recently, incremental learning (IL), also referred to as lifelong learning or continual learning, has received extensive attention [11, 12, 13, 14] as a way to enable DNNs to preserve and extend knowledge continually. Many earlier studies focus on task-incremental learning, which uses separate output layers for different tasks and needs the task identity for inference [11, 15, 16]. In this work, we consider the more realistic and challenging setting of class-incremental learning (Class-IL), where the model only has access to data of new classes at each stage and needs to learn a unified classifier that can classify all seen classes [13, 17, 18]. Unfortunately, the learning paradigm of Class-IL leads to two problems: representation bias and classifier bias, as shown in Figure 1. First, for representation learning, if the feature extractor is fixed after learning old classes, the learned representations are preserved but lack transferability for new classes; on the contrary, if we update the feature extractor on new classes, the updated representations would no longer be suitable for old classes. Consequently, the old and new classes would easily overlap in the deep feature space. We denote this dilemma as the representation bias. Second, to distinguish new classes from old classes, the training loss is typically calculated over all classes. Without old training data, the class weights of old classes would be ill-updated and mismatched with the updated representation space.
We denote this dilemma as the classifier bias. In this work, we investigate the learning of the representation and the classifier in incremental learning and propose a simple and effective dual augmentation framework to overcome these two biases in Class-IL without storing and replaying training data of old classes. Learning Representation for Incremental Learning. Existing works typically regularize network parameters explicitly [11, 15, 16] or implicitly [12] to reduce the representation shift when learning new classes. In this paper, instead of asking how to keep previously learned representations unchanged, we investigate the following question: What properties of learned representations could facilitate incremental learning? We hypothesize that learning transferable and diverse representations is an important requirement for incremental learning. Intuitively, with such representations, it could be easier to find a model that performs well on all tasks and improves both plasticity and stability, since different tasks would be closer in the parameter space. From a spectral analysis viewpoint, we investigate which components of feature representations are more transferable and less forgettable in the incremental learning process. We find that spectral components with large eigenvalues are less forgettable. Furthermore, we exploit this finding to propose a simple technique named classAug, which can enlarge the spectral components to introduce more diverse and transferable representations for incremental learning. Learning Classifier for Incremental Learning. Recently, several works were proposed to alleviate the classifier bias in data replay based methods [18, 19, 20]. However, in the non-exemplar based (i.e., without storing and replaying old data) Class-IL setting, the classifier bias is more serious and the above methods cannot be directly used. A straightforward way is to store instances of old classes in the deep feature space. However, this strategy is undesirable due to limited memory resources and poor scalability. This work delves into classifier learning for Class-IL and proposes an implicit semantic augmentation (semanAug) approach that generates an infinite number of instances of old classes in the deep feature space by leveraging their distribution information. SemanAug is inspired by MCF [21] and ISDA [22], which have performed semantic augmentation for linear models and DNNs, respectively. However, both our motivation and the way we leverage semantic augmentation fundamentally differ from theirs [21, 22]. Contributions. (i) We provide new insights into representation learning in incremental learning by analyzing the structural characteristics of the learned embedding space via spectral decomposition, and find that spectral components with large eigenvalues are less forgettable and carry more transferable features. Based on this observation, we propose a simple and effective method, classAug, to learn a better embedding space for incremental learning. (ii) For classifier learning in incremental learning, we propose semanAug, which implicitly and simultaneously generates an infinite number of instances of old classes in the deep feature space to maintain the decision boundary of previously learned classes. (iii) Extensive experiments on benchmark datasets demonstrate the superior performance of our dual augmentation framework in the challenging scenario of Class-IL. 2 Related Work Incremental Learning. Diverse approaches have been proposed for incremental learning of DNNs.
They can be roughly divided into three categories: regularization based, data replay based, and architecture based approaches. Regularization based methods focus on weight regularization by estimating the importance of network weights and preventing the important ones from changing [11, 15, 16]. The difference among those methods is the way the importance of the parameters is computed. However, it is hard to design a reasonable metric to measure the importance of parameters, and it is known that regularization strategies show poor performance in the Class-IL scenario [23, 24]. Data replay based methods address both the representation bias and the classifier bias straightforwardly by storing a fraction of old data to jointly train the model with current data. With stored real samples, some works [17, 13, 25] use a distillation loss to prevent forgetting, while others [26, 27, 28] develop gradient-based regularization to make more efficient use of the rehearsal data. To avoid storing real data, another line of work generates pseudo-samples of all previous classes for replay using deep generative models [29, 30, 31, 32]. Nevertheless, storing real data is undesirable in resource-limited or privacy- and safety-sensitive scenarios. Moreover, training big generative models for complex datasets is inefficient. Architecture based methods dynamically extend the network structure during the course of incremental learning [33, 34, 35, 36]. However, growing the architecture is infeasible for large numbers of tasks, and those methods are often impractical for Class-IL. Data Augmentation. There is a rich literature on data augmentation for improving the generalization of DNNs. Classical strategies commonly synthesize “positive” new samples in a way that is consistent with the underlying data distribution of the original dataset [3]. Recent works show that label mixing based methods such as Mixup [37] and Cutmix [38] can greatly improve the generalization of DNNs. Complementary to the input-space augmentations mentioned above, some works have explored feature-space augmentations, which augment the learned representations in the deep embedding space to enhance classifier performance. The intuition behind those works is that certain directions in the deep feature space correspond to meaningful semantic transformations [39, 40]. For instance, deep feature interpolation [40] leverages simple interpolations in the embedding space to achieve semantic augmentation. The recently proposed ISDA [22] performs semantic augmentation by estimating and leveraging the category-wise distribution of deep representations in an online manner. Despite its simplicity, ISDA is effective in semi-supervised learning [22], contrastive learning [41], domain adaptation [42], and long-tailed recognition [43]. 3 Dual Augmentation Framework for Class-Incremental Learning We first formalize the problem of Class-IL, then introduce the proposed classAug for representation learning and semanAug for classifier learning, respectively, and finally present the dual augmentation framework for Class-IL by combining the two augmentations. Problem Definition. Typically, a Class-IL problem involves the sequential learning of T tasks that consist of disjoint class sets, and the model has to classify all seen classes at any given point in training. At incremental step t ∈ {1, ..., T}, (x, y) ∈ $D_t$ denotes a training sample, where x is a sample in the input space X and y ∈ $C_t$ is its corresponding label. $C_t$ is the class set of task t.
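To make the task protocol concrete, here is a minimal Python sketch of constructing disjoint class sets for such a sequence; the function and its defaults are illustrative, not taken from the authors' code.

import numpy as np

def make_class_il_splits(num_classes=100, base_classes=50, phases=5, seed=0):
    # Split num_classes labels into one base task with base_classes classes,
    # followed by `phases` tasks sharing the remaining classes equally
    # (e.g., the 50 + 5x10 setting on CIFAR-100).
    rng = np.random.RandomState(seed)
    order = rng.permutation(num_classes)          # fixed random class order
    splits = [order[:base_classes]]               # task 1: base classes
    rest = order[base_classes:]
    per_phase = len(rest) // phases
    for t in range(phases):                       # tasks 2..T: disjoint class sets C_t
        splits.append(rest[t * per_phase:(t + 1) * per_phase])
    return splits

splits = make_class_il_splits(100, 50, 5)         # 6 disjoint class sets
assert sum(len(s) for s in splits) == 100

Each split plays the role of one class set $C_t$; a model trained under this protocol never revisits the data of earlier splits.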
To facilitate analysis, we represent the DNN-based model with two components: a feature extractor and a unified classifier. Specifically, the feature extractor $f_\theta: X \to Z$, parameterized by $\theta$, maps the input $x$ into a feature vector $z = f_\theta(x) \in \mathbb{R}^d$ in the deep feature space $Z$; the unified classifier $g_\phi: Z \to \mathbb{R}^{C_{1:t}}$, parameterized by $\phi$, produces a probability distribution $g_\phi(z)$ as the prediction for $x$. Denote the overall parameters by $\Theta = (\theta, \phi)$. The general objective is to correctly classify test examples from all seen classes [44]. The key challenge of Class-IL is that data from previous tasks are assumed to be unavailable, which means that the best configuration of the model for all seen tasks must be sought by minimizing the predefined loss function $L$ (e.g., cross-entropy) on the current data $D_t$: $\arg\min_{\theta,\phi} \mathbb{E}_{(x,y)\sim D_t}\left[L(g_\phi(f_\theta(x)), y)\right]$. (1) A widely used strategy to preserve old knowledge is knowledge distillation [45], which typically matches the current model with the previous model's response to the current training data using the teacher-student framework [12, 13, 19]. 3.1 Learning Representation with Class Augmentation As we focus on non-exemplar based Class-IL, we intentionally avoid storing training samples of old classes. To maintain the generalizability of the learned representations for old classes, existing methods typically restrain the feature extractor from changing [11, 15, 16, 12]. However, this leads to a trade-off between plasticity and stability [5], and it would be hard to perform long-step incremental learning. Our high-level idea is to learn transferable and diverse representations to bridge the old and new classes in a better feature space. To delve into this problem, we want to answer two questions: (1) Which part of the feature representations tends to be forgotten in incremental learning? (2) How can representation learning be facilitated for incremental learning? 3.1.1 Analyzing Forgetting via Spectral Decomposition In what follows, we explore which part of the feature representations tends to be forgotten and may not be transferable across different tasks in incremental learning. To this end, we propose to quantify the sensitivity of the model to different directions in the deep feature space by measuring the similarity of the space before and after learning new tasks. Formally, let $f_{\theta,old}$ be a feature extractor trained on a dataset $D_{old} = \{(x_i, y_i)\}_{i=1}^{n}$. A new dataset $D_{new}$, whose classes are disjoint from those of $D_{old}$, is used to update $f_{\theta,old}$, and the updated feature extractor is denoted as $f_{\theta,new}$. For the samples in $D_{old}$, we can get two groups of deep features, mapped by $f_{\theta,old}$ and $f_{\theta,new}$, respectively. Using eigenvalue decomposition, we can decompose the features mapped by the original feature extractor (i.e., $f_{\theta,old}(x_i)$) as well as the features mapped by the updated feature extractor (i.e., $f_{\theta,new}(x_i)$) into different directions as follows: $\frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i) f_\theta(x_i)^T = \sum_{j=1}^{d} u_j \lambda_j u_j^T$, (2) where $\lambda_j$ represents the eigenvalue with index $j$ and $u_j$ is its eigenvector; $d$ is the dimensionality of the feature space. Through the spectral factorization in Eq. (2), we can represent the original and new representations with two groups of eigenvectors: $\{u_{old,1}, ..., u_{old,d}\}$ and $\{u_{new,1}, ..., u_{new,d}\}$. Next, we investigate the forgetting or transferability of each direction. Shonkwiler [46] introduced the principal angles [47] to measure the similarity of two subspaces.
However, it is unreasonable to treat all eigenvectors equally when calculating the principal angles, regardless of their relative eigenvalues. Inspired by [48], we use corresponding angles, denoted by $\psi$, to explore the distance between two subspaces in incremental learning: Definition 1 (Corresponding Angle) Given two groups of eigenvectors $\{u_{old,1}, ..., u_{old,d}\}$ and $\{u_{new,1}, ..., u_{new,d}\}$, the corresponding angle is the angle between two eigenvectors corresponding to the same eigenvalue index. The cosine of the corresponding angle is: $\cos(\psi_j) = \frac{\langle u_{old,j},\, u_{new,j}\rangle}{\|u_{old,j}\| \cdot \|u_{new,j}\|}$, (3) where $u_{old,j}$ is the eigenvector with the $j$-th largest eigenvalue in the old feature space, and similarly for $u_{new,j}$. Note that $\|u_{old,j}\| = 1$ and $\|u_{new,j}\| = 1$. For IL, “preserving old knowledge” refers to maintaining the previously learned decision boundary among classes. At the representation level, for an old class, the shape (i.e., covariance) of the distribution should not change too much. If an eigenvector direction only changes slightly after updating the feature extractor, the corresponding angle is small, and vice versa. Intuitively, the corresponding angle could capture the representation shift between the old and updated feature extractors during incremental learning, and reflect the forgetting along certain directions in the deep feature space. Based on the metric defined above, we explore the forgetting of different directions in Class-IL. We use LwF-MC [12, 13] as the baseline method and train a ResNet-18 [1] on CIFAR-100 [49] using SGD in a 2-step manner. Concretely, the model is first trained on the first 50 classes and then updated on the other 50 classes. Figure 2 (a) shows the absolute cosine values of the corresponding angles between the old and new eigenvectors. We can observe that eigenvectors with larger eigenvalues produce larger similarity (small corresponding angles), which indicates those directions are more transferable and less forgettable across different tasks. On the contrary, the eigenvectors with small eigenvalues tend to move after updating the model on new tasks, and can be regarded as forgettable directions. Transferable and Diverse Representations. As demonstrated above, the directions with larger eigenvalues transfer better and suffer less forgetting. This thought-provoking observation indicates that the learned representations should have the following properties: (1) Transferability: the eigenvalues of the several significant directions should be enlarged to transfer across tasks (or classes). (2) Diversity: the number of directions with significant eigenvalues should be increased. Note that these properties differ from those in the common single-task learning scenario. Actually, reducing the number of directions with significant variance has been seen as a form of feature compression [51], which is linked to generalization by information theory [52, 53]. However, the usual concepts of generalization may not be entirely appropriate for IL, since standard learning only aims to learn compact representations within the training classes without considering generalizability to new classes. In IL, those less discriminative directions for the current task could capture useful representations for future tasks. A recent paper [54] has shown that strongly compressed representations can actually hurt generalization ability in the deep metric learning setting.
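To make this analysis concrete, the following is a minimal numpy sketch of Eq. (2) and Eq. (3); it assumes the deep features of $D_{old}$ under the old and updated extractors are stacked in (n, d) arrays, and the helper names are illustrative.

import numpy as np

def sorted_eig(features):
    # Eq. (2): eigendecomposition of the (uncentered) feature second moment,
    # with eigenvectors sorted by descending eigenvalue.
    second_moment = features.T @ features / len(features)   # (d, d)
    eigvals, eigvecs = np.linalg.eigh(second_moment)        # ascending order
    idx = np.argsort(eigvals)[::-1]
    return eigvals[idx], eigvecs[:, idx]                    # columns are u_j

def corresponding_angle_cosines(feats_old, feats_new):
    # Eq. (3): |cos(psi_j)| between eigenvectors with the same eigenvalue index.
    _, u_old = sorted_eig(feats_old)
    _, u_new = sorted_eig(feats_new)
    # eigh returns unit-norm eigenvectors, so the inner product is the cosine
    return np.abs(np.sum(u_old * u_new, axis=0))

Plotting these cosines against the eigenvalue index reproduces the qualitative pattern of Figure 2 (a): large values at the top of the spectrum, small values in the tail.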
Therefore, to reduce forgetting and enhance the transferability of the representations, it is important to enlarge the eigenvalues and increase the number of eigenvectors with significant variance. 3.1.2 Learning Representations via Class Augmentation We now exploit the above analysis to propose a simple method for representation learning in Class-IL. Our key idea is to learn transferable and diverse representations by learning more classes at each incremental stage t. A direct way to do so is to introduce real classes from other datasets as auxiliary classes. However, it is unrealistic to always have access to other real classes, and which datasets should be used remains unknown. Therefore, we propose class augmentation (classAug) to augment the original classes by synthesizing auxiliary classes based on $D_t$. Concretely, inspired by Mixup [37], classAug randomly interpolates two samples $x_a$ and $x_b$ from two different classes $a$ and $b$ to generate a new sample $x^{new}_{ab}$ representing a new class: $x^{new}_{ab} = \lambda x_a + (1 - \lambda)x_b$, (4) where $\lambda$ is a randomly sampled interpolation coefficient. For a k-class problem, we can generate k(k − 1)/2 new classes using the above method, which can be further merged into m auxiliary classes. As a result, the original k-class problem in the current task is extended to a (k + m)-class problem. Moreover, we restrict $\lambda$ to be sampled from the interval [0.4, 0.6] to reduce the overlap between the augmented and original classes. At the end of each IL stage, the augmented class nodes in the classifier are removed. Discussion. The proposed classAug is related to Mixup [37], which applies random interpolation to a pair of training samples and the respective one-hot labels. However, the interpolated samples in Mixup stay near the original data, and the number of classes is not changed, whereas in our method it is increased. By learning to classify more classes at each stage t, the model can learn more transferable and diverse representations. Figure 2 (b) displays and compares the eigenvalues of representations learned with different methods on the first 50 classes of CIFAR-100 (to visualize the distribution clearly, the largest eigenvalue is not included in the figure). It is obvious that the proposed classAug can enlarge the eigenvalues significantly and produce more directions with significant variance compared with other methods. On the contrary, Mixup and Label Smoothing (LS) [50] lead to significantly smaller eigenvalues for the several top eigenvectors, which corresponds to more compact representations. Indeed, the compression effect of soft-label based methods has also been demonstrated in [51, 50]. As shown in Section 4.3, classAug can improve the performance of Class-IL significantly, while Mixup and LS have a negative effect in our experiments. 3.2 Learning Classifier with Semantic Augmentation As demonstrated in Section 1, classifier bias is another problem in Class-IL. When learning new classes, the previously learned decision boundary suffers from catastrophic distortion, and thus test samples from old classes can easily be mapped to wrong classes. To overcome this issue, we propose semantic augmentation (semanAug), which leverages the distribution information (i.e., class mean and covariance) of old classes to regularize the learning of the classifier. Formally, for each old class $k \in \{1, ..., C_{old}\}$, we can generate M instances in the deep feature space from its distribution, i.e., $\tilde{z}_k \sim \mathcal{N}(\mu_k, \gamma\Sigma_k)$, in which $\gamma$ is a non-negative coefficient.
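Both augmentations can be written in a few lines. Below is a minimal sketch of Eq. (4) and of the explicit feature-space generation just described; the function names are illustrative, and the merging of interpolated pairs into m auxiliary classes is omitted for brevity.

import numpy as np

def class_aug(x_a, x_b, rng):
    # Eq. (4): interpolate samples of two different classes into a sample of a
    # new auxiliary class, with lambda restricted to [0.4, 0.6] as in the paper.
    lam = rng.uniform(0.4, 0.6)
    return lam * x_a + (1.0 - lam) * x_b

def sample_old_features(mu_k, sigma_k, M, gamma=2.0, rng=None):
    # Explicitly draw M pseudo-instances z_k ~ N(mu_k, gamma * Sigma_k) for an
    # old class k -- the naive scheme that Eqs. (6)-(7) below make implicit.
    rng = rng or np.random.RandomState(0)
    return rng.multivariate_normal(mu_k, gamma * sigma_k, size=M)   # (M, d)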
Then the generated instances of old classes and the real instances of new classes in the deep feature space can be jointly fed to the classifier to minimize the cross-entropy loss: $L_t = \underbrace{\frac{1}{n_t}\sum_{i=1}^{n_t} -\log\left(\frac{e^{\phi_{y_i}^T z_i + b_{y_i}}}{\sum_{c=1}^{C_{all}} e^{\phi_c^T z_i + b_c}}\right)}_{L_{t,new}\text{: loss on real features of new classes}} + \underbrace{\frac{1}{C_{old}}\sum_{k=1}^{C_{old}} \frac{1}{M}\sum_{m=1}^{M} -\log\left(\frac{e^{\phi_k^T \tilde{z}_{k,m} + b_k}}{\sum_{c=1}^{C_{all}} e^{\phi_c^T \tilde{z}_{k,m} + b_c}}\right)}_{L_{t,old}\text{: loss on generated features of old classes}}$, (5) where $n_t$ is the number of training samples in the current task dataset $D_t$, $C_{old}$ is the total number of old classes up to stage t, and $C_{all} = C_{old} + C_t$ is the number of all seen classes at stage t. $\phi = [\phi_1, ..., \phi_{C_{all}}]^T \in \mathbb{R}^{C_{all} \times d}$ and $b = [b_1, ..., b_{C_{all}}]^T \in \mathbb{R}^{C_{all}}$ are the weight matrix and bias vector of the last fully connected layer, respectively. In Class-IL, the second term in Eq. (5), $L_{t,old}$, is computationally inefficient when M and $C_{old}$ are large. In the following, we present an easy-to-compute way to implicitly generate infinite instances in the deep feature space for old classes. Upper bound of $L_{t,old}$. Concretely, in the limit $M \to \infty$, the second term in Eq. (5) becomes: $L_{t,old} = \frac{1}{C_{old}}\sum_{k=1}^{C_{old}} \mathbb{E}_{\tilde{z}_k}\left[-\log\left(\frac{e^{\phi_k^T \tilde{z}_k + b_k}}{\sum_{c=1}^{C_{all}} e^{\phi_c^T \tilde{z}_k + b_c}}\right)\right] = \frac{1}{C_{old}}\sum_{k=1}^{C_{old}} \mathbb{E}_{\tilde{z}_k}\left[\log\left(\sum_{c=1}^{C_{all}} e^{(\phi_c^T - \phi_k^T)\tilde{z}_k + (b_c - b_k)}\right)\right] \leq \frac{1}{C_{old}}\sum_{k=1}^{C_{old}} \log\left(\mathbb{E}_{\tilde{z}_k}\left[\sum_{c=1}^{C_{all}} e^{(\phi_c^T - \phi_k^T)\tilde{z}_k + (b_c - b_k)}\right]\right) = \frac{1}{C_{old}}\sum_{k=1}^{C_{old}} \log\left(\sum_{c=1}^{C_{all}} e^{v_{c,k}^T \mu_k + (b_c - b_k) + \frac{\gamma}{2} v_{c,k}^T \Sigma_k v_{c,k}}\right)$. (6) In the above equation, $v_{c,k} = \phi_c - \phi_k$. The inequality follows from Jensen's inequality $\mathbb{E}[\log(X)] \leq \log \mathbb{E}[X]$, and the last equality is obtained using the moment-generating function $\mathbb{E}[e^{tX}] = e^{t\mu + \frac{1}{2}\sigma^2 t^2}$ for $X \sim \mathcal{N}(\mu, \sigma^2)$, due to the fact that $(\phi_c - \phi_k)^T \tilde{z}_k + (b_c - b_k)$ is a Gaussian random variable. As can be seen, Eq. (6) is an upper bound of the original $L_{t,old}$, which provides an elegant and much more efficient way to implicitly generate infinite instances in the deep feature space for old classes. The $L_{t,old}$ in Eq. (6) can be written in the common cross-entropy loss form: $L_{t,semanAug} \triangleq L_{t,old} = \frac{1}{C_{old}}\sum_{k=1}^{C_{old}} -\log\left(\frac{e^{\phi_k^T \mu_k + b_k}}{\sum_{c=1}^{C_{all}} e^{\phi_c^T \mu_k + b_c + \frac{\gamma}{2} v_{c,k}^T \Sigma_k v_{c,k}}}\right)$. (7) Intuitively, $L_{t,old}$ implicitly performs semantic transformations of $\mu_k$ based on $\Sigma_k$. To maintain the decision boundary, $\gamma$ should be smaller if the distribution of a class is near the decision boundary; conversely, $\gamma$ should be bigger if the distance is relatively large. We set $\gamma = 2$ in our experiments. In addition, we can observe that when $\gamma = 0$, only the class means are used for knowledge retention. Discussion. (1) Although the derivation of the upper bound in Eq. (6) is similar to that of ISDA [22], both our motivation and the way we leverage semanAug are different from ISDA. When learning new classes, we only apply semanAug to the class mean of each old class based on the memorized distribution information, whereas ISDA applies semantic augmentation to all the training samples to improve generalization in standard supervised learning. In addition, a crucial step in ISDA is to estimate the mean and covariance matrix of each class in an online manner. In contrast, semanAug is naturally suitable for Class-IL, since the distribution of old classes can be estimated with all training samples at the end of each learning stage. (2) Using previous class statistics for IL has also been explored in IL2M [55]. However, our method differs from IL2M in both the statistics used and the way they are leveraged. First, the class statistics in IL2M are the prediction scores of the classifier, while ours are the class distribution statistics in the deep feature space.
Second, IL2M uses the class statistics to calibrate the predictions of a continual learner in a post-processing manner, while our method leverages the statistics to automatically learn a balanced classifier. 3.3 The Dual Augmentation Learning Framework With classAug for the representation bias and semanAug for the classifier bias, Figure 4 describes the learning process of the dual augmentation framework (IL2A). We also use the well-known knowledge distillation (KD) [19], for two reasons. First, classAug and KD are complementary and focus on different aspects of representation learning. Second, KD can reduce the change of the feature extractor, which is crucial for semanAug because semanAug implicitly generates instances in the deep feature space from the old distributions. The total learning objective at each stage t is as follows: $L_t = L_{t,new} + \alpha L_{t,semanAug} + \beta L_{t,kd}$, (8) where $\alpha$ and $\beta$ are two hyper-parameters, $L_{t,new}$ and $L_{t,semanAug}$ are shown in Eq. (5) and Eq. (7), respectively, and $L_{t,kd} = \frac{1}{n_t}\sum_{i=1}^{n_t} \|f_{\theta_{t-1}}(x_i) - f_{\theta_t}(x_i)\|$. Note that $L_{t,new}$ and $L_{t,semanAug}$ are applied to both the original and synthesized samples. Algorithm 1 presents the pseudo code of IL2A. Algorithm 1: IL2A dual augmentation algorithm. Randomly initialize $\Theta_0 = \{\theta_0, \phi_0\}$ and set $S_0 = \emptyset$; then, for each incremental stage $t \in \{1, ..., T\}$ (input: model $\Theta_{t-1}$ and data $D_t = \{(x_i, y_i)\}_{i=1}^{n_t}$; output: model $\Theta_t$): (1) $\Theta_t \leftarrow \Theta_{t-1}$; (2) build $D_{t,aug} = \{(x'_i, y'_i)\}_{i=1}^{n'_t}$ via classAug and add class nodes for the augmented classes; (3) if $t = 1$, train $\Theta_t$ by minimizing $L(g_\phi(f_\theta(x')), y')$, otherwise train $\Theta_t$ by minimizing Eq. (8); (4) compute $s = \{\mu, \Sigma\}$ for each class in $D_t$ and set $S_t \leftarrow S_{t-1} \cup s$; (5) remove the augmented class nodes from the classifier. 4 Experiments 4.1 Evaluation Protocol Datasets. We perform our experiments on CIFAR-100 [49] and Tiny-ImageNet [56]. A common setting is to train the model on half of the classes in the first task and on an equal number of classes in each of the remaining incremental steps. Based on this, we split the CIFAR-100 dataset into different settings: 50 + 5×10, 50 + 10×5, and 40 + 20×3. For instance, 50 + 10×5 means that the first task contains 50 classes and each of the following 10 tasks contains 5 classes. Similarly, the settings for Tiny-ImageNet are 100 + 5×20, 100 + 10×10, and 100 + 20×5. Intuitively, more classes in each task make each task a harder problem, while increasing the length of the task sequence challenges the model's retention. Implementation Details. In our experiments, we follow [44] and utilize ResNet-18 [1] as our base architecture, trained from scratch in each experiment. All models are trained using the Adam [57] optimizer with an initial learning rate of 0.001 for 100 epochs with a mini-batch size of 64. The learning rate is reduced by a factor of 10 at 45 and 90 epochs. We use the same hyper-parameter values for all experiments. Specifically, we set $\alpha = 10$ and $\beta = 10$ in Eq. (8). The number of augmented classes (i.e., m) depends on the number of (original) classes at the current incremental step. Taking CIFAR-100 as an example, m is 45 for the 5-phase setting, where each incremental step has 10 classes, and m is 10 for the 10-phase setting, where each incremental step has 5 classes. At the end of each incremental stage, we evaluate the model on all seen classes after removing the class nodes of the m augmented classes from the classifier. Our code is available at https://github.com/Impression2805/IL2A. Comparison Methods. Our method (IL2A) does not store any old samples for replay when learning new classes.
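To connect Algorithm 1 with the losses above, here is a minimal PyTorch sketch of the semanAug term of Eq. (7), the only piece of the Eq. (8) objective without a standard library equivalent. It assumes stored per-class means mu of shape (C_old, d) and covariances sigma of shape (C_old, d, d), and that the first C_old rows of the classifier weights correspond to the old classes; the tensor names are illustrative, and this is a sketch under these assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def seman_aug_loss(phi, b, mu, sigma, gamma=2.0):
    # phi: (C_all, d) classifier weights; b: (C_all,) biases;
    # mu: (C_old, d) old-class means; sigma: (C_old, d, d) old-class covariances.
    c_old = mu.shape[0]
    logits = mu @ phi.t() + b                           # phi_c^T mu_k + b_c, (C_old, C_all)
    # v_{c,k} = phi_c - phi_k for every (k, c) pair: (C_old, C_all, d)
    v = phi.unsqueeze(0) - phi[:c_old].unsqueeze(1)
    # gamma/2 * v_{c,k}^T Sigma_k v_{c,k}, batched over old classes k
    quad = 0.5 * gamma * torch.einsum('kcd,kde,kce->kc', v, sigma, v)
    # since v_{k,k} = 0, the target logit stays phi_k^T mu_k + b_k, as in Eq. (7)
    return F.cross_entropy(logits + quad, torch.arange(c_old))

The mean reduction of F.cross_entropy supplies the 1/C_old factor, so the returned value matches Eq. (7) directly.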
Since IL2A stores no old samples, we first compare it with several non-exemplar based approaches: MAS [16], LwF-MC [13], MUC [58], and LwM [59]. In addition, we also compare with several exemplar based methods such as iCaRL [13], EEIL [18], and LUCIR [19]. Specifically, for the data replay based methods, we follow [13, 19] and store 20 samples per class using the 'herd' selection technique [13]. We report the average top-1 accuracy over all previously seen classes up to each incremental step t. For iCaRL, we report its results with CNN predictions and with nearest-mean-of-exemplars classification separately, denoted as iCaRL-CNN and iCaRL-NME. 4.2 Experimental Results Main Results. Comparative results are shown in Figure 5. First, we observe that our method performs much better than non-exemplar based methods such as LwF-MC and MUC in terms of the accuracy curves under different settings. In particular, the gap appears unbridgeable in the long-step Class-IL settings, e.g., 10 phases and 20 phases. This suggests that merely constraining old parameters does not suffice to prevent forgetting; we argue that this is partly due to the unaddressed classifier bias. When compared to representative data replay based methods such as iCaRL, EEIL, and LUCIR, our method shows remarkably strong performance without storing old samples. The success of our method can be attributed to the proposed classAug and semanAug. Specifically, classAug is applied to the new classes of the current task, which enables the model to learn more transferable and diverse representations for future classes and, in turn, reduces the forgetting of old parameters when learning new classes. SemanAug, in contrast, is applied to the old classes of previous tasks and leverages their valuable distribution information to learn a unified classifier that connects the classes from different tasks. Ablation Study. To evaluate the effect of each component of IL2A, we perform an ablation study and show the results for the 10-phase setting (CIFAR-100) in Table 1. Specifically, the baseline denotes the method that does not generate pseudo-instances using semanAug but only replays the class mean of each old class when training on new classes. By doing so, we aim to validate the effectiveness of semanAug compared with only replaying class means. In summary, we can observe that: (1) The baseline improves the performance of KD significantly. (2) SemanAug improves the performance of the baseline from 34.71% to 42.09%. These results indicate the value of the distribution information for maintaining old knowledge in Class-IL. (3) ClassAug also has a remarkable effect on the baseline, and (4) the performance can be further improved by combining it with semanAug, which indicates that the two modules are complementary. Similar results are observed in the other settings of the CIFAR-100 and Tiny-ImageNet datasets. (5) As for computational complexity, classAug involves input-level sample mixing, and the augmented samples are fed through the feature extractor, whereas semanAug performs implicit old-instance generation in the deep feature space; semanAug is therefore cheaper than classAug from a computational perspective. 4.3 Further Analysis ClassAug Improves both Plasticity and Stability in Class-IL. To analyze the effectiveness of classAug more concretely, we explore how it affects the new-task accuracy (↑) and the average forgetting (↓) (CIFAR-100, 10-phase setting). Average forgetting [60] is defined to estimate the forgetting of previous tasks.
The forgetting measure $f_k^i$ of the $i$-th task after training the $k$-th task is defined as $f_k^i = \max_{t \in \{1,...,k-1\}}(a_{t,i} - a_{k,i}), \forall i < k$, in which $a_{m,n}$ is the accuracy of task $n$ after training task $m$. The average forgetting measure $F_k$ is then defined as $F_k = \frac{1}{k-1}\sum_{i=1}^{k-1} f_k^i$. Intuitively, the new-task accuracy can be viewed as a measure of the plasticity of the incremental learner, and the average forgetting as a measure of its stability. Figure 6 (a) and (b) report the results, from which we see that classAug simultaneously improves the new-task accuracy and reduces the average forgetting. Specifically, the significant improvement in new-task accuracy implies that the model trained with classAug provides a good initialization for the following tasks. Consequently, classAug is effective in improving the trade-off between plasticity and stability of a continual learner. Comparing ClassAug with Other Regularizers. We compare the proposed classAug with Mixup and LS in Figure 6 (c), where the baseline (with semanAug) represents our IL2A without classAug. As can be seen, Mixup and LS have a negative effect on the final accuracy. This phenomenon can be interpreted based on the analysis in Section 3.1.1 and Figure 2 (b). Specifically, those regularizers result in more compressed representations, damaging the transferability of the representations. Besides, the label smoothing strategy also affects the weights of old classes in the classifier, thus increasing the classifier bias. Similar results have also been reported in [61]. Discussion of the Covariance Matrix. In our main experiments, we use the original covariance matrix for semanAug. However, storing the original covariance matrix might be inefficient when the matrix dimension is large. An alternative is to store only the diagonal elements, which greatly reduces the memory cost. Figure 7 also reports the results of using the diagonal covariance matrix. Under different settings, using the original covariance matrix is slightly better than the diagonal form. This is reasonable because the original covariance matrix stores more distribution information about the old classes. However, using the diagonal covariance matrix would be more memory-efficient in practice. ClassAug Improves Confidence Reliability. During the continuous use of a machine learning system in open-world applications, there are three main steps [62]. The first step is out-of-distribution (OOD) detection [63], which requires the system to detect unknown samples from novel classes. The second step is to label the collected unknown samples, by humans or by automatic algorithms [64]. Finally, the system must scale and adapt incrementally to learn the novel classes, which is the Class-IL problem studied in this paper. Recent studies have found that DNNs are overconfident in their predictions [63, 65] and lack the ability to detect samples from unknown classes. In real-world applications, we expect a continual learner to have good OOD detection ability, so we explore the OOD detection ability of the proposed classAug. Concretely, we train a ResNet-18 on CIFAR-10, and the test samples from CIFAR-10 are in-distribution. For OOD examples, we test on MNIST [66], Fashion-MNIST [67], LSUN (resized) [68], and Tiny-ImageNet (resized). As shown in Table 2, classAug noticeably improves the OOD detection performance of the baseline [63] on commonly used metrics such as AUROC, AUPR-In, and AUPR-Out [63].
By learning to recognize synthetic samples, DNNs could learn more robust and transferable representations that generalize to OOD samples. Moreover, as shown in Table 2, Mixup sometimes damages OOD detection performance, which further demonstrates the superiority of classAug. 5 Conclusion In this paper, we propose a simple and effective dual augmentation framework to address the representation bias and classifier bias in Class-IL. We first investigate the transferability (or forgetting) of representations via spectral decomposition, which motivates us to propose classAug, a technique that learns transferable, diverse, and less compact representations for IL. Furthermore, we propose semanAug to implicitly generate infinite instances of old classes in the deep feature space while jointly learning the unified classifier. Experiments show that our method achieves remarkable performance compared with state-of-the-art Class-IL methods. Future work will consider the dual augmentation framework for more challenging scenarios such as Class-IL with distribution shift and OOD data, few-shot Class-IL, and federated incremental learning. Acknowledgements This work has been supported by the National Key Research and Development Program under Grant No. 2018AAA0100400, the National Natural Science Foundation of China (NSFC) grants U20A20223, 61633021, 62076236, 61721004, the Key Research Program of Frontier Sciences of CAS under Grant ZDBS-LY-7004, and the Youth Innovation Promotion Association of CAS under Grant 2019141.
1. What are the main contributions of the paper regarding class-incremental learning? 2. What are the strengths of the proposed approach, particularly in representation learning? 3. Do you have any concerns or questions about the paper, especially regarding its clarity and experimental validation? 4. How does the reviewer assess the novelty of the proposed method?
Summary Of The Paper Review
Summary Of The Paper The paper addresses two problems which occur in class-incremental learning: representation bias and classifier bias. To address the representation bias, the authors propose to learn a less compact representation in each task. More concretely, they augment the number of existing classes by introducing random interpolations among real samples. Regarding the classifier bias, the authors propose to generate an infinite number of features from old classes to maintain the decision boundary of previously learned classes. The insights into representation learning gained by analyzing the structural characteristics of the learned embedding space via spectral decomposition are interesting. The paper is relatively well written, although it lacks details in some aspects. The approach is moderately novel, being a combination of existing techniques. The experimental validation does not confirm that the proposed approach improves the current state of the art in class-incremental learning. Review Here are my concerns: The problems addressed in the current paper, namely representation bias and classifier bias, are not depicted clearly enough in Figure 1. I would recommend improving the clarity of the figure. Page 5, 'Feature Compression Perspective': what is the difference between the 'overall features' and the 'learned features'? Why is there a difference in dimensionality in the two cases? Figure 4: According to Section 3.1.2, the extracted features from f_(t-1) and f_t are further compressed, right? This step is not depicted in the figure. An algorithm summarizing all the steps of the approach would increase the clarity of the paper. There are several typos in the paper, most of them on page 7.
NIPS
Title Class-Incremental Learning via Dual Augmentation Abstract Deep learning systems typically suffer from catastrophic forgetting of past knowledge when acquiring new skills continually. In this paper, we emphasize two dilemmas, representation bias and classifier bias in class-incremental learning, and present a simple and novel approach that employs explicit class augmentation (classAug) and implicit semantic augmentation (semanAug) to address the two biases, respectively. On the one hand, we propose to address the representation bias by learning transferable and diverse representations. Specifically, we investigate the feature representations in incremental learning based on spectral analysis and present a simple technique called classAug that lets the model see more classes during training to learn representations transferable across classes. On the other hand, to overcome the classifier bias, semanAug implicitly and simultaneously generates an infinite number of instances of old classes in the deep feature space, which poses tighter constraints to maintain the decision boundary of previously learned classes. Without storing any old samples, our method performs comparably to representative data replay based approaches. 1 Introduction Deep neural networks (DNNs) have enabled great success in many machine learning tasks, based on stationary, large-scale training data and computationally and memory-intensive training [1, 2, 3]. Yet the need to acquire sequential experience in dynamic and open environments [4, 5, 6] poses a serious challenge to modern deep learning systems, which only perform well on homogenized, balanced, and shuffled data [7]. Typically, DNNs suffer from drastic performance degradation on previously learned tasks after learning new knowledge, a well-documented phenomenon known as catastrophic forgetting [8, 9, 10]. Recently, incremental learning (IL), also referred to as lifelong learning or continual learning, has received extensive attention [11, 12, 13, 14] as a way to enable DNNs to preserve and extend knowledge continually. Many earlier studies focus on task-incremental learning, which uses separate output layers for different tasks and needs the task identity for inference [11, 15, 16]. In this work, we consider the more realistic and challenging setting of class-incremental learning (Class-IL), where the model only has access to data of new classes at each stage and needs to learn a unified classifier that can classify all seen classes [13, 17, 18]. Unfortunately, the learning paradigm of Class-IL leads to two problems: representation bias and classifier bias, as shown in Figure 1. First, for representation learning, if the feature extractor is fixed after learning old classes, the learned representations are preserved but lack transferability for new classes; on the contrary, if we update the feature extractor on new classes, the updated representations would no longer be suitable for old classes. Consequently, the old and new classes would easily overlap in the deep feature space. We denote this dilemma as the representation bias. Second, to distinguish new classes from old classes, the training loss is typically calculated over all classes. Without old training data, the class weights of old classes would be ill-updated and mismatched with the updated representation space.
We denote this dilemma as the classifier bias. In this work, we investigate the learning of the representation and the classifier in incremental learning and propose a simple and effective dual augmentation framework to overcome these two biases in Class-IL without storing and replaying training data of old classes. Learning Representation for Incremental Learning. Existing works typically regularize network parameters explicitly [11, 15, 16] or implicitly [12] to reduce the representation shift when learning new classes. In this paper, instead of asking how to keep previously learned representations unchanged, we investigate the following question: What properties of learned representations could facilitate incremental learning? We hypothesize that learning transferable and diverse representations is an important requirement for incremental learning. Intuitively, with such representations, it could be easier to find a model that performs well on all tasks and improves both plasticity and stability, since different tasks would be closer in the parameter space. From a spectral analysis viewpoint, we investigate which components of feature representations are more transferable and less forgettable in the incremental learning process. We find that spectral components with large eigenvalues are less forgettable. Furthermore, we exploit this finding to propose a simple technique named classAug, which can enlarge the spectral components to introduce more diverse and transferable representations for incremental learning. Learning Classifier for Incremental Learning. Recently, several works were proposed to alleviate the classifier bias in data replay based methods [18, 19, 20]. However, in the non-exemplar based (i.e., without storing and replaying old data) Class-IL setting, the classifier bias is more serious and the above methods cannot be directly used. A straightforward way is to store instances of old classes in the deep feature space. However, this strategy is undesirable due to limited memory resources and poor scalability. This work delves into classifier learning for Class-IL and proposes an implicit semantic augmentation (semanAug) approach that generates an infinite number of instances of old classes in the deep feature space by leveraging their distribution information. SemanAug is inspired by MCF [21] and ISDA [22], which have performed semantic augmentation for linear models and DNNs, respectively. However, both our motivation and the way we leverage semantic augmentation fundamentally differ from theirs [21, 22]. Contributions. (i) We provide new insights into representation learning in incremental learning by analyzing the structural characteristics of the learned embedding space via spectral decomposition, and find that spectral components with large eigenvalues are less forgettable and carry more transferable features. Based on this observation, we propose a simple and effective method, classAug, to learn a better embedding space for incremental learning. (ii) For classifier learning in incremental learning, we propose semanAug, which implicitly and simultaneously generates an infinite number of instances of old classes in the deep feature space to maintain the decision boundary of previously learned classes. (iii) Extensive experiments on benchmark datasets demonstrate the superior performance of our dual augmentation framework in the challenging scenario of Class-IL. 2 Related Work Incremental Learning. Diverse approaches have been proposed for incremental learning of DNNs.
They can be roughly divided into three categories: regularization based, data replay based, and architecture based approaches. Regularization based methods focus on weight regularization by estimating the importance of network weights and preventing the important ones from changing [11, 15, 16]. The difference among those methods is the way the importance of the parameters is computed. However, it is hard to design a reasonable metric to measure the importance of parameters, and it is known that regularization strategies show poor performance in the Class-IL scenario [23, 24]. Data replay based methods address both the representation bias and the classifier bias straightforwardly by storing a fraction of old data to jointly train the model with current data. With stored real samples, some works [17, 13, 25] use a distillation loss to prevent forgetting, while others [26, 27, 28] develop gradient-based regularization to make more efficient use of the rehearsal data. To avoid storing real data, another line of work generates pseudo-samples of all previous classes for replay using deep generative models [29, 30, 31, 32]. Nevertheless, storing real data is undesirable in resource-limited or privacy- and safety-sensitive scenarios. Moreover, training big generative models for complex datasets is inefficient. Architecture based methods dynamically extend the network structure during the course of incremental learning [33, 34, 35, 36]. However, growing the architecture is infeasible for large numbers of tasks, and those methods are often impractical for Class-IL. Data Augmentation. There is a rich literature on data augmentation for improving the generalization of DNNs. Classical strategies commonly synthesize “positive” new samples in a way that is consistent with the underlying data distribution of the original dataset [3]. Recent works show that label mixing based methods such as Mixup [37] and Cutmix [38] can greatly improve the generalization of DNNs. Complementary to the input-space augmentations mentioned above, some works have explored feature-space augmentations, which augment the learned representations in the deep embedding space to enhance classifier performance. The intuition behind those works is that certain directions in the deep feature space correspond to meaningful semantic transformations [39, 40]. For instance, deep feature interpolation [40] leverages simple interpolations in the embedding space to achieve semantic augmentation. The recently proposed ISDA [22] performs semantic augmentation by estimating and leveraging the category-wise distribution of deep representations in an online manner. Despite its simplicity, ISDA is effective in semi-supervised learning [22], contrastive learning [41], domain adaptation [42], and long-tailed recognition [43]. 3 Dual Augmentation Framework for Class-Incremental Learning We first formalize the problem of Class-IL, then introduce the proposed classAug for representation learning and semanAug for classifier learning, respectively, and finally present the dual augmentation framework for Class-IL by combining the two augmentations. Problem Definition. Typically, a Class-IL problem involves the sequential learning of T tasks that consist of disjoint class sets, and the model has to classify all seen classes at any given point in training. At incremental step t ∈ {1, ..., T}, (x, y) ∈ $D_t$ denotes a training sample, where x is a sample in the input space X and y ∈ $C_t$ is its corresponding label. $C_t$ is the class set of task t.
To facilitate analysis, we represent the DNN-based model with two components: a feature extractor and a unified classifier. Specifically, the feature extractor $f_\theta: X \to Z$, parameterized by $\theta$, maps the input $x$ into a feature vector $z = f_\theta(x) \in \mathbb{R}^d$ in the deep feature space $Z$; the unified classifier $g_\phi: Z \to \mathbb{R}^{C_{1:t}}$, parameterized by $\phi$, produces a probability distribution $g_\phi(z)$ as the prediction for $x$. Denote the overall parameters by $\Theta = (\theta, \phi)$. The general objective is to correctly classify test examples from all seen classes [44]. The key challenge of Class-IL is that data from previous tasks are assumed to be unavailable, which means that the best configuration of the model for all seen tasks must be sought by minimizing the predefined loss function $L$ (e.g., cross-entropy) on the current data $D_t$: $\arg\min_{\theta,\phi} \mathbb{E}_{(x,y)\sim D_t}\left[L(g_\phi(f_\theta(x)), y)\right]$. (1) A widely used strategy to preserve old knowledge is knowledge distillation [45], which typically matches the current model with the previous model's response to the current training data using the teacher-student framework [12, 13, 19]. 3.1 Learning Representation with Class Augmentation As we focus on non-exemplar based Class-IL, we intentionally avoid storing training samples of old classes. To maintain the generalizability of the learned representations for old classes, existing methods typically restrain the feature extractor from changing [11, 15, 16, 12]. However, this leads to a trade-off between plasticity and stability [5], and it would be hard to perform long-step incremental learning. Our high-level idea is to learn transferable and diverse representations to bridge the old and new classes in a better feature space. To delve into this problem, we want to answer two questions: (1) Which part of the feature representations tends to be forgotten in incremental learning? (2) How can representation learning be facilitated for incremental learning? 3.1.1 Analyzing Forgetting via Spectral Decomposition In what follows, we explore which part of the feature representations tends to be forgotten and may not be transferable across different tasks in incremental learning. To this end, we propose to quantify the sensitivity of the model to different directions in the deep feature space by measuring the similarity of the space before and after learning new tasks. Formally, let $f_{\theta,old}$ be a feature extractor trained on a dataset $D_{old} = \{(x_i, y_i)\}_{i=1}^{n}$. A new dataset $D_{new}$, whose classes are disjoint from those of $D_{old}$, is used to update $f_{\theta,old}$, and the updated feature extractor is denoted as $f_{\theta,new}$. For the samples in $D_{old}$, we can get two groups of deep features, mapped by $f_{\theta,old}$ and $f_{\theta,new}$, respectively. Using eigenvalue decomposition, we can decompose the features mapped by the original feature extractor (i.e., $f_{\theta,old}(x_i)$) as well as the features mapped by the updated feature extractor (i.e., $f_{\theta,new}(x_i)$) into different directions as follows: $\frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i) f_\theta(x_i)^T = \sum_{j=1}^{d} u_j \lambda_j u_j^T$, (2) where $\lambda_j$ represents the eigenvalue with index $j$ and $u_j$ is its eigenvector; $d$ is the dimensionality of the feature space. Through the spectral factorization in Eq. (2), we can represent the original and new representations with two groups of eigenvectors: $\{u_{old,1}, ..., u_{old,d}\}$ and $\{u_{new,1}, ..., u_{new,d}\}$. Next, we investigate the forgetting or transferability of each direction. Shonkwiler [46] introduced the principal angles [47] to measure the similarity of two subspaces.
However, it is unreasonable to treat all eigenvectors equally when calculating the principal angles, regardless of their relative eigenvalues. Inspired by [48], we use corresponding angles, denoted by $\psi$, to explore the distance between two subspaces in incremental learning: Definition 1 (Corresponding Angle) Given two groups of eigenvectors $\{u_{old,1}, ..., u_{old,d}\}$ and $\{u_{new,1}, ..., u_{new,d}\}$, the corresponding angle is the angle between two eigenvectors corresponding to the same eigenvalue index. The cosine of the corresponding angle is: $\cos(\psi_j) = \frac{\langle u_{old,j},\, u_{new,j}\rangle}{\|u_{old,j}\| \cdot \|u_{new,j}\|}$, (3) where $u_{old,j}$ is the eigenvector with the $j$-th largest eigenvalue in the old feature space, and similarly for $u_{new,j}$. Note that $\|u_{old,j}\| = 1$ and $\|u_{new,j}\| = 1$. For IL, “preserving old knowledge” refers to maintaining the previously learned decision boundary among classes. At the representation level, for an old class, the shape (i.e., covariance) of the distribution should not change too much. If an eigenvector direction only changes slightly after updating the feature extractor, the corresponding angle is small, and vice versa. Intuitively, the corresponding angle could capture the representation shift between the old and updated feature extractors during incremental learning, and reflect the forgetting along certain directions in the deep feature space. Based on the metric defined above, we explore the forgetting of different directions in Class-IL. We use LwF-MC [12, 13] as the baseline method and train a ResNet-18 [1] on CIFAR-100 [49] using SGD in a 2-step manner. Concretely, the model is first trained on the first 50 classes and then updated on the other 50 classes. Figure 2 (a) shows the absolute cosine values of the corresponding angles between the old and new eigenvectors. We can observe that eigenvectors with larger eigenvalues produce larger similarity (small corresponding angles), which indicates those directions are more transferable and less forgettable across different tasks. On the contrary, the eigenvectors with small eigenvalues tend to move after updating the model on new tasks, and can be regarded as forgettable directions. Transferable and Diverse Representations. As demonstrated above, the directions with larger eigenvalues transfer better and suffer less forgetting. This thought-provoking observation indicates that the learned representations should have the following properties: (1) Transferability: the eigenvalues of the several significant directions should be enlarged to transfer across tasks (or classes). (2) Diversity: the number of directions with significant eigenvalues should be increased. Note that these properties differ from those in the common single-task learning scenario. Actually, reducing the number of directions with significant variance has been seen as a form of feature compression [51], which is linked to generalization by information theory [52, 53]. However, the usual concepts of generalization may not be entirely appropriate for IL, since standard learning only aims to learn compact representations within the training classes without considering generalizability to new classes. In IL, those less discriminative directions for the current task could capture useful representations for future tasks. A recent paper [54] has shown that strongly compressed representations can actually hurt generalization ability in the deep metric learning setting.
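As a small illustration of the diversity criterion, one can count the directions with significant variance directly from the eigenvalue spectrum; a minimal sketch follows, where the relative threshold is an illustrative choice rather than a value from the paper.

import numpy as np

def significant_directions(features, rel_threshold=0.01):
    # Count eigenvalue directions of the feature second moment (as in Eq. (2))
    # whose eigenvalue exceeds a fraction of the largest one.
    second_moment = features.T @ features / len(features)
    eigvals = np.linalg.eigvalsh(second_moment)     # ascending, real-valued
    return int(np.sum(eigvals > rel_threshold * eigvals[-1]))

Comparing this count for representations trained with classAug, Mixup, or LS mirrors the qualitative comparison in Figure 2 (b).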
Therefore, to reduce forgetting and enhance the transferability of the representations, it is important to enlarge the eigenvalues and increase the number of eigenvectors with significant variance. 3.1.2 Learning Representations via Class Augmentation We now exploit the above analysis to propose a simple method for representation learning in Class-IL. Our key idea is to learn transferable and diverse representations by learning more classes at each incremental stage t. A direct way to do so is to introduce real classes from other datasets as auxiliary classes. However, it is unrealistic to always have access to other real classes, and which datasets should be used remains unknown. Therefore, we propose class augmentation (classAug) to augment the original classes by synthesizing auxiliary classes based on $D_t$. Concretely, inspired by Mixup [37], classAug randomly interpolates two samples $x_a$ and $x_b$ from two different classes $a$ and $b$ to generate a new sample $x^{new}_{ab}$ representing a new class: $x^{new}_{ab} = \lambda x_a + (1 - \lambda)x_b$, (4) where $\lambda$ is a randomly sampled interpolation coefficient. For a k-class problem, we can generate k(k − 1)/2 new classes using the above method, which can be further merged into m auxiliary classes. As a result, the original k-class problem in the current task is extended to a (k + m)-class problem. Moreover, we restrict $\lambda$ to be sampled from the interval [0.4, 0.6] to reduce the overlap between the augmented and original classes. At the end of each IL stage, the augmented class nodes in the classifier are removed. Discussion. The proposed classAug is related to Mixup [37], which applies random interpolation to a pair of training samples and the respective one-hot labels. However, the interpolated samples in Mixup stay near the original data, and the number of classes is not changed, whereas in our method it is increased. By learning to classify more classes at each stage t, the model can learn more transferable and diverse representations. Figure 2 (b) displays and compares the eigenvalues of representations learned with different methods on the first 50 classes of CIFAR-100 (to visualize the distribution clearly, the largest eigenvalue is not included in the figure). It is obvious that the proposed classAug can enlarge the eigenvalues significantly and produce more directions with significant variance compared with other methods. On the contrary, Mixup and Label Smoothing (LS) [50] lead to significantly smaller eigenvalues for the several top eigenvectors, which corresponds to more compact representations. Indeed, the compression effect of soft-label based methods has also been demonstrated in [51, 50]. As shown in Section 4.3, classAug can improve the performance of Class-IL significantly, while Mixup and LS have a negative effect in our experiments. 3.2 Learning Classifier with Semantic Augmentation As demonstrated in Section 1, classifier bias is another problem in Class-IL. When learning new classes, the previously learned decision boundary suffers from catastrophic distortion, and thus test samples from old classes can easily be mapped to wrong classes. To overcome this issue, we propose semantic augmentation (semanAug), which leverages the distribution information (i.e., class mean and covariance) of old classes to regularize the learning of the classifier. Formally, for each old class $k \in \{1, ..., C_{old}\}$, we can generate M instances in the deep feature space from its distribution, i.e., $\tilde{z}_k \sim \mathcal{N}(\mu_k, \gamma\Sigma_k)$, in which $\gamma$ is a non-negative coefficient.
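A minimal sketch of this explicit generation step is given below; for memory efficiency it stores only the diagonal of the covariance, a variant the paper also evaluates (Figure 7), so it is an illustrative simplification rather than the main method.

import numpy as np

def sample_old_features_diag(mu_k, var_k, M, gamma=2.0, rng=None):
    # Draw M pseudo-instances z_k ~ N(mu_k, gamma * diag(var_k)) for old class k,
    # storing only the d diagonal entries var_k instead of the full covariance.
    rng = rng or np.random.RandomState(0)
    std = np.sqrt(gamma * var_k)                      # element-wise std, shape (d,)
    return mu_k + std * rng.randn(M, mu_k.shape[0])   # (M, d)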
Then the generated instances of old classes and the real instances of new classes in the deep feature space can be jointly fed to the classifier to minimize the cross-entropy loss:

L_t = (1/n_t) Σ_{i=1}^{n_t} −log( exp(φ_{y_i}^T z_i + b_{y_i}) / Σ_{c=1}^{C_all} exp(φ_c^T z_i + b_c) )  [L_t,new: loss on real features of new classes]
    + (1/C_old) Σ_{k=1}^{C_old} (1/M) Σ_{m=1}^{M} −log( exp(φ_k^T z̃_{k,m} + b_k) / Σ_{c=1}^{C_all} exp(φ_c^T z̃_{k,m} + b_c) )  [L_t,old: loss on generated features of old classes], (5)

where n_t is the number of training samples in the current task dataset D_t, C_old is the number of total old classes up to stage t, and C_all = C_old + C_t is the number of all seen classes at stage t. φ = [φ_1, ..., φ_{C_all}]^T ∈ R^{C_all×d} and b = [b_1, ..., b_{C_all}]^T ∈ R^{C_all} are the weight matrix and bias vector of the last fully connected layer, respectively. In Class-IL, the second term in Eq. (5), L_t,old, is computationally inefficient when M and C_old are large. In the following, we present an easy-to-compute way to implicitly generate infinite instances in the deep feature space for the old classes.

Upper bound of L_t,old. Concretely, in the case of M → ∞, the second term in Eq. (5) becomes

L_t,old = (1/C_old) Σ_{k=1}^{C_old} E_{z̃_k}[ −log( exp(φ_k^T z̃_k + b_k) / Σ_{c=1}^{C_all} exp(φ_c^T z̃_k + b_c) ) ]
        = (1/C_old) Σ_{k=1}^{C_old} E_{z̃_k}[ log( Σ_{c=1}^{C_all} exp((φ_c − φ_k)^T z̃_k + (b_c − b_k)) ) ]
        ≤ (1/C_old) Σ_{k=1}^{C_old} log( E_{z̃_k}[ Σ_{c=1}^{C_all} exp((φ_c − φ_k)^T z̃_k + (b_c − b_k)) ] )
        = (1/C_old) Σ_{k=1}^{C_old} log( Σ_{c=1}^{C_all} exp( v_{c,k}^T µ_k + (b_c − b_k) + (γ/2) v_{c,k}^T Σ_k v_{c,k} ) ), (6)

where v_{c,k} = φ_c − φ_k. The inequality follows from Jensen's inequality, E[log(X)] ≤ log E[X], and the last equality is obtained from the moment-generating function E[e^{tX}] = e^{tµ + σ²t²/2} for X ∼ N(µ, σ²), since (φ_c − φ_k)^T z̃_k + (b_c − b_k) is a Gaussian random variable. Eq. (6) is thus an upper bound of the original L_t,old, which provides an elegant and much more efficient way to implicitly generate infinite instances in the deep feature space for old classes. The bound can be written in the common cross-entropy form:

L_t,semanAug ≜ L_t,old = (1/C_old) Σ_{k=1}^{C_old} −log( exp(φ_k^T µ_k + b_k) / Σ_{c=1}^{C_all} exp(φ_c^T µ_k + b_c + (γ/2) v_{c,k}^T Σ_k v_{c,k}) ). (7)

Intuitively, L_t,old implicitly performs semantic transformations of µ_k based on Σ_k. To maintain the decision boundary, γ should be smaller if the distribution of a class is near the decision boundary, and larger if the distance is relatively far. We set γ = 2 in our experiments. In addition, observe that when γ = 0, only the class means are used for knowledge retention. A vectorized sketch of Eq. (7) is given below.
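The following PyTorch sketch vectorizes Eq. (7). It is our own illustration, not the released code, and it assumes the old classes occupy the first C_old rows of the classifier weight matrix.

```python
import torch
import torch.nn.functional as F

def seman_aug_loss(mu, sigma, weight, bias, gamma=2.0):
    """Closed-form L_{t,semanAug} of Eq. (7), vectorized over old classes.
    mu: (C_old, d) old class means; sigma: (C_old, d, d) old class covariances;
    weight: (C_all, d) and bias: (C_all,) from the last fully connected layer."""
    c_old = mu.size(0)
    logits = mu @ weight.t() + bias                        # phi_c^T mu_k + b_c, (C_old, C_all)
    v = weight.unsqueeze(0) - weight[:c_old].unsqueeze(1)  # v[k, c] = phi_c - phi_k
    quad = torch.einsum('kcd,kde,kce->kc', v, sigma, v)    # v_{c,k}^T Sigma_k v_{c,k}
    adj = logits + 0.5 * gamma * quad                      # quad[k, k] = 0, so the target
    targets = torch.arange(c_old, device=mu.device)        # logit matches the numerator
    return F.cross_entropy(adj, targets)                   # mean over the C_old old classes
```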
Discussion. (1) Although the derivation of the upper bound in Eq. (6) is similar to ISDA [22], both our motivation and the way we leverage semanAug are different. When learning new classes, we only apply semanAug to the class mean of each old class based on the memorized distribution information, whereas ISDA applies semantic augmentation to all training samples to improve generalization in standard supervised learning. In addition, a crucial step in ISDA is to estimate the mean and covariance matrix of each class in an online manner. In contrast, semanAug is naturally suitable for Class-IL, since the distribution of old classes can be estimated with all training samples at the end of each learning stage. (2) Using previous class statistics for IL has also been explored in IL2M [55]. However, our method differs from IL2M in both the statistics used and the way they are leveraged. First, the class statistics in IL2M are the prediction scores of the classifier, while ours are the class distribution statistics in the deep feature space. Second, IL2M uses the class statistics to calibrate the predictions of a continual learner in a post-processing manner, while our method leverages the statistics to automatically learn a balanced classifier.

3.3 The Dual Augmentation Learning Framework

With classAug for representation bias and semanAug for classifier bias, Figure 4 describes the learning process of the dual augmentation framework (IL2A). We also use the well-known knowledge distillation (KD) [19] for two reasons. First, classAug and KD are complementary and focus on different aspects of representation learning. Second, KD reduces the change of the feature extractor, which is crucial for semanAug because semanAug implicitly generates instances in the deep feature space from the old distributions. The total learning objective at each stage t is as follows:

L_t = L_t,new + α L_t,semanAug + β L_t,kd, (8)

where α and β are two hyper-parameters, L_t,new and L_t,semanAug are given in Eq. (5) and Eq. (7), respectively, and L_t,kd = (1/n_t) Σ_{i=1}^{n_t} ‖f_{θ_{t−1}}(x_i) − f_{θ_t}(x_i)‖. Note that L_t,new and L_t,semanAug are applied to both the original and synthesized samples. Algorithm 1 presents the pseudo code of IL2A.

4 Experiments

4.1 Evaluation Protocol

Algorithm 1: IL2A: dual augmentation algorithm
  Randomly initialize Θ_0 = {θ_0, φ_0}; S_0 = ∅;
  for each incremental stage t ∈ {1, ..., T}:
    Input: model Θ_{t−1}, data D_t = {(x_i, y_i)}_{i=1}^{n_t}; Output: model Θ_t;
    Θ_t ← Θ_{t−1};
    obtain D_t,aug = {(x′_i, y′_i)}_{i=1}^{n′_t} via classAug;
    add class nodes for the augmented classes;
    if t = 1: train Θ_t by minimizing L(g_φ(f_θ(x′)), y′);
    else: train Θ_t by minimizing Eq. (8);
    s ← compute {µ, Σ} for each class in D_t;
    S_t ← S_{t−1} ∪ s;
    remove the augmented class nodes from the classifier.

Datasets. We perform our experiments on CIFAR-100 [49] and Tiny-ImageNet [56]. A common setting is to train the model on half of the classes in the first task, and on an equal number of classes in each remaining incremental step. Based on this, we split the CIFAR-100 dataset into different settings: 50 + 5×10, 50 + 10×5, 40 + 20×3. For instance, 50 + 10×5 means that the first task contains 50 classes and each of the following 10 tasks contains 5 classes. Similarly, the settings for Tiny-ImageNet are 100 + 5×20, 100 + 10×10 and 100 + 20×5. Intuitively, more classes per task makes each task harder, while increasing the length of the task sequence challenges the model's retention.

Implementation Details. In our experiments, we follow [44] and use ResNet-18 [1] as the base architecture, trained from scratch in each experiment. All models are trained using the Adam [57] optimizer with an initial learning rate of 0.001 for 100 epochs with a mini-batch size of 64. The learning rate is reduced by a factor of 10 at epochs 45 and 90. We use the same hyper-parameter values for all experiments; specifically, we set α = 10 and β = 10 in Eq. (8). The number of augmented classes (i.e., m) depends on the number of original classes at the current incremental step. Taking CIFAR-100 as an example, m is 45 for the 5-phase setting where each incremental step has 10 classes, and m is 10 for the 10-phase setting where each incremental step has 5 classes. At the end of each incremental stage, we evaluate the model on all seen classes after removing the class nodes of the m augmented classes from the classifier. Our code is available at https://github.com/Impression2805/IL2A.

Comparison Methods. Our method (IL2A) does not store any old samples for replay when learning new classes.
Therefore, we first compare IL2A with several non-exemplar based approaches: MAS [16], LwF-MC [13], MUC [58], LwM [59]. In addition, we also compare with several exemplar based methods such as iCaRL [13], EEIL [18] and LUCIR [19]. Specifically, for the data-replay based methods, we follow [13, 19] and store 20 samples per class using the 'herding' selection technique [13]. We report the average top-1 accuracy over all previously seen classes up to each incremental step t. For iCaRL, we report the results of both CNN predictions and nearest-mean-of-exemplars classification, denoted iCaRL-CNN and iCaRL-NME, respectively.

4.2 Experimental Results

Main Results. Comparative results are shown in Figure 5. First, we observe that our method performs much better than non-exemplar based methods such as LwF-MC and MUC in terms of the accuracy curves under different settings. The gap appears particularly unbridgeable in long-step Class-IL settings, e.g., 10 phases and 20 phases. This suggests that only constraining old parameters does not suffice to prevent forgetting; we argue that this is partly due to the unaddressed classifier bias. When compared to representative data-replay based methods such as iCaRL, EEIL and LUCIR, our method shows remarkably strong performance without storing old samples. The success of our method can be attributed to the proposed classAug and semanAug. Specifically, classAug is applied to the new classes of the current task, which enables the model to learn more transferable and diverse representations for future classes and, in turn, reduces the forgetting of old parameters when learning new classes. SemanAug is applied to the old classes of previous tasks, leveraging their valuable distribution information to learn a unified classifier that connects the classes from different tasks to each other.

Ablation Study. To evaluate the effect of each component of IL2A, we perform an ablation study and report the results of the 10-phase setting (CIFAR-100) in Table 1. Specifically, the baseline denotes the method that does not generate pseudo-instances using semanAug but only replays the class mean of each old class when training new classes. By doing so, we aim to validate the effectiveness of semanAug compared with replaying class means only. In summary, we observe that: (1) The baseline improves the performance of KD significantly. (2) SemanAug improves the performance of the baseline from 34.71% to 42.09%. These results indicate the value of the distribution information for maintaining old knowledge in Class-IL. (3) ClassAug also has a remarkable effect on the baseline, and (4) the performance can be further improved by combining it with semanAug, which indicates that the two modules are complementary. Similar results are observed in the other settings of CIFAR-100 and Tiny-ImageNet. (5) As for computational complexity, classAug involves input-level sample mixing and the augmented samples are fed through the feature extractor, whereas semanAug performs implicit old-instance generation in the deep feature space. Therefore, semanAug is computationally cheaper than classAug.

4.3 Further Analysis

ClassAug Improves both Plasticity and Stability in Class-IL. To analyze the effectiveness of classAug more concretely, we explore how it affects the new-task accuracy (↑) and the average forgetting (↓) (CIFAR-100, 10-phase setting). Average forgetting [60] estimates the forgetting of previous tasks: the forgetting measure f_k^i of the i-th task after training the k-th task is defined as f_k^i = max_{t∈{1,...,k−1}} (a_{t,i} − a_{k,i}), ∀i < k, where a_{m,n} is the accuracy on task n after training task m; the average forgetting measure is then F_k = (1/(k−1)) Σ_{i=1}^{k−1} f_k^i. A minimal sketch of this measure follows.
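The NumPy sketch below computes F_k from a recorded accuracy matrix. The function name and the matrix layout are our own assumptions, not from [60].

```python
import numpy as np

def average_forgetting(acc):
    """Average forgetting F_k after the last trained task.
    acc[m, n]: accuracy on task n measured after training task m (valid for n <= m)."""
    k = acc.shape[0] - 1                                  # index of the last trained task
    # f_k^i = max_t a_{t,i} - a_{k,i}; accuracy on task i only exists for t >= i
    f = [acc[i:k, i].max() - acc[k, i] for i in range(k)]
    return float(np.mean(f))
```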
Intuitively, the new-task accuracy can be viewed as the plasticity of the incremental learner and the average forgetting as its stability. Figure 6 (a) and (b) report the results, from which we see that classAug simultaneously improves the new-task accuracy and reduces the average forgetting. In particular, the significant improvement in new-task accuracy implies that a model trained with classAug is a good initialization for the following tasks. Consequently, classAug is effective in improving the trade-off between the plasticity and stability of a continual learner.

Comparing ClassAug with Other Regularizers. We compare the proposed classAug with Mixup and LS in Figure 6 (c), where the baseline (with semanAug) is our IL2A without classAug. As can be seen, Mixup and LS have a negative effect on the final accuracy. This phenomenon can be interpreted based on the analysis in Section 3.1.1 and Figure 2 (b): those regularizers result in more compressed representations, damaging the transferability of the representations. Besides, the label-smoothing strategy also affects the weights of old classes in the classifier, thus increasing the classifier bias. Similar results have also been reported in [61].

Discussion of the Covariance Matrix. In our main experiments, we use the original covariance matrix for semanAug. However, storing the full covariance matrix can be inefficient when its dimension is large. An alternative is to store only the diagonal elements, which greatly reduces the memory cost. Figure 7 also reports the results of using the diagonal covariance matrix. Under different settings, using the full covariance matrix is slightly better than the diagonal form. This is reasonable because the full covariance matrix stores more distribution information about the old classes; however, the diagonal covariance matrix is more memory-efficient in practice.

ClassAug Improves Confidence Reliability. During the continuous use of a machine learning system in open-world applications, there are three key steps [62]. The first is out-of-distribution (OOD) detection [63], which requires the system to detect unknown samples from novel classes. The second is to label the collected unknown samples by humans or automatic algorithms [64]. Finally, the system must scale and adapt incrementally to learn the novel classes, which is the Class-IL problem studied in this paper. Recent studies have found that DNNs are overconfident in their predictions [63, 65] and lack the ability to detect samples from unknown classes. In real-world applications, we expect a continual learner to have good OOD detection ability, so we explore the OOD detection ability of the proposed classAug. Concretely, we train a ResNet-18 on CIFAR-10; the test samples from CIFAR-10 are in-distribution, and for OOD examples we test on MNIST [66], Fashion-MNIST [67], LSUN (resized) [68] and Tiny-ImageNet (resized). As shown in Table 2, classAug noticeably improves the OOD detection performance of the baseline [63] on commonly used metrics such as AUROC, AUPR-In and AUPR-Out [63] (a sketch of the baseline scoring is given below). By recognizing synthetic samples, DNNs can learn more robust and transferable representations that generalize better to OOD samples. Moreover, as shown in Table 2, Mixup sometimes damages OOD detection performance, which further demonstrates the superiority of classAug.
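For readers unfamiliar with the metrics, the sketch below shows one way the baseline scoring could be computed, assuming [63] refers to the common maximum-softmax-probability baseline; this is our hedged reconstruction, not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def msp_auroc(logits_in, logits_out):
    """AUROC when the maximum softmax probability is the confidence score:
    in-distribution samples should receive higher confidence than OOD samples."""
    def msp(logits):
        z = logits - logits.max(axis=1, keepdims=True)     # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)
    scores = np.concatenate([msp(logits_in), msp(logits_out)])
    labels = np.concatenate([np.ones(len(logits_in)), np.zeros(len(logits_out))])
    return roc_auc_score(labels, scores)
```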
5 Conclusion

In this paper, we propose a simple and effective dual augmentation framework to address the representation bias and classifier bias in Class-IL. We first investigate the transferability (or forgetting) of representations via spectral decomposition, which motivates us to propose classAug, which learns transferable, diverse and less compact representations for IL. Furthermore, we propose semanAug to implicitly generate infinite instances of old classes in the deep feature space during the joint learning of the unified classifier. Experiments show that our method achieves remarkable performance compared with state-of-the-art Class-IL methods. Future work will consider the dual augmentation framework for more challenging scenarios such as Class-IL with distribution shift and OOD data, few-shot Class-IL, and federated incremental learning.

Acknowledgements

This work has been supported by the National Key Research and Development Program under Grant No. 2018AAA0100400, the National Natural Science Foundation of China (NSFC) grants U20A20223, 61633021, 62076236, 61721004, the Key Research Program of Frontier Sciences of CAS under Grant ZDBS-LY-7004, and the Youth Innovation Promotion Association of CAS under Grant 2019141.
1. What is the main contribution of the paper on non-replay class-incremental learning? 2. What are the strengths of the proposed approach, particularly in its simplicity and performance? 3. What are the weaknesses of the paper regarding the clarity of certain concepts and comparisons with other works? 4. How does the reviewer assess the novelty and relevance of the proposed approach in continual learning? 5. Are there any minor issues or typos that need to be addressed in the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a non-replay class-incremental learning method that works by combining input mixup with pseudo-replay at the feature level. The authors motivate this method by means of a simple study on how feature-space eigenvectors change during training. The proposed approach is evaluated against a collection of current continual learning methods and shows competitive performance.
Review
In general, I found the technical core of this work to be very interesting: I found semanAug to be a simple yet very interesting approach. Recent Continual Learning literature often disregards feature pseudo-rehearsal to focus instead on input replay or generation. According to the results shown by the authors, this approach is deserving of further exploration. The overall baseline proposed by the authors is very simple and yet manages to show remarkable performance, given its lack of a memory buffer. It is very remarkable to see it performing on par with memory-based methods, which up to now are generally regarded as the only viable solution for class-incremental learning. However, this work is characterized by several weaknesses that I feel need to be addressed to increase its potential: In the introduction, the authors make a distinction between classifier and representation bias. While the latter clearly refers to the mismatch between the features learned in past tasks vs the ones needed for new tasks, I cannot clearly understand what the authors mean by classifier bias. The definition provided at lines 32-33 seems to me a general description of the stability-plasticity dilemma, the issue at the core of continual learning and widely debated in the literature [11, 17 in the paper]. I do not see this as a bias to be solved, but rather as a generally valid description of what happens when training a neural network on incremental data. Along the same line, the authors claim at line 128 that setting a trade-off between plasticity and stability hinders learning in the long run. Again, I see this as a choice that needs to be made and that is not negative per se: whether it hinders training only depends on the produced amount of stability. The authors claim at lines 44-45 that "learning diverse and transferable representation is an important requirement […] which has been ignored by previous works". I do not find this statement to be accurate: finding suitable and transferable representations has always been one of the cornerstones of continual learning approaches [13, 28 in the paper, Joseph et al. 2020, Chaudhry et al. 2020]. This paper adopts an unclear position towards replay-based methods: on the one hand, the authors claim that they are deliberately targeting non-exemplar-based continual learning; on the other hand, they only adopt a replay-based method for their preliminary experiment in section 3.1 and compare IL2A with several replay methods in section 4, so they are not limiting themselves to the aforementioned setting. Even though the method proposed by the authors requires retaining a previous snapshot of the model (which amounts to a memory footprint that may be comparable with replay methods), their non-exemplar-based proposal performing on par with rehearsal approaches is a significant result. However, I wonder if their proposed approach can be successfully coupled with replay and get to even better performance. Indeed, adding such a simple additional experiment could make the whole work even more relevant.
A continual learning work that shares some similarities with the authors' proposal is IL2M (Belouadah et al., 2019), which also makes use of previous class statistics to rectify the prediction scores and prevent representation bias. I think that adding a direct comparison with how this principle is differently exploited in this work could provide a comprehensive picture for the readers. I found the experimental sections to be often unclear, so I strongly suggest that the authors take further steps to improve them, with particular reference to the following points:
I could not understand the meaning of the numbers contained in Table 1. Are they average accuracy values or incremental accuracy values? Are these numbers referring to different experiments (50-split dataset, 60-split dataset, etc.) or do they refer to the same experiment? What dataset is being used?
What does "eigenvalues of covariance" mean at line 182? How exactly is the intrinsic dimension of features defined at line 188? When introducing a specific approach for evaluation, it could be useful to explicitly present it to the reader or at least introduce relevant references.
Running continual learning experiments is often a complex matter, since there exist different experimental settings which are often incompatible. In order to evaluate whether experiments are fairly conducted, since the authors did not submit the code for review, but state they will publish it upon acceptance, I need more details on the way experiments were conducted. What codebase was used for the experiments: a new one created from scratch, or one based on the code from other works? Are all results computed anew for all methods or taken from other works? How was hyper-parameter selection conducted for the competitors: were they tuned at all or were the original hyperparameters used?
In the ablation study, I could not clearly understand how the baseline is constructed: how is the class-mean of old classes used to regularize the classifier (line 311)? Does this mean that only part of the semanAug loss is used? I could not understand this from the text.
How is the forgetting measure at line 324 defined? There are several instances of such measures in CL literature, so I would need a citation to understand which one the authors are referring to.
The following minor issues did not affect my evaluation directly, but I list them for the authors to consider:
I believe that adding variance values for the presented results (at least in the supplementary material) would be very useful to further simplify the understanding of the presented data.
Since the denominator in equation 3 is 1, the equation could be simplified.
In line 202, the authors define [56] "a very nice recent paper"; I would recommend keeping a neutral tone instead.
The Comic Sans typeface was used in most presented figures, which is unusual in scientific papers. A common sans-serif font would probably be more appropriate.
There are several typos that need fixing:
Line 39: learning, and → learning and
Line 49: is benefited from → benefits from
Line 62: to storing → to store / storing
Line 96: Classical strategies commonly synthetic → verb is missing
Line 99: in complementary → in complement
Line 129: our high idea → our high-level idea
Line 136: naturally rises → naturally raises
Line 140: How the regularization affect → How does the regularization affect
Line 143: are tend to be forgetting → tend to be forgotten
Line 149: is donated as → is denoted as
Line 161: two groups eigenvectors → two groups of eigenvectors
Line 185: We note that simimlar → We note that a similar
Line 236: tast samples → task samples
Line 236: wrong class → wrong classes
Line 237: high level idea to → high-level idea is to
Caption of figure 4: hans → hand
Line 266: differnt → different
Line 274: rest incremental step → remaining incremental steps
Line 300: bais → bias
Line 306: task → tasks
Line 310: method that using → method using
Line 311: and use the class → and class
(Joseph et al., 2020) Meta-consolidation for continual learning, NeurIPS 2020
(Chaudhry et al., 2020) Continual Learning in Low-rank Orthogonal Subspaces, NeurIPS 2020
(Belouadah et al., 2019) IL2M: Class incremental learning with dual memory, ICCV 2019
NIPS
Title Attention in Convolutional LSTM for Gesture Recognition Abstract Convolutional long short-term memory (LSTM) networks have been widely used for action/gesture recognition, and different attention mechanisms have also been embedded into the LSTM or the convolutional LSTM (ConvLSTM) networks. Based on previous gesture recognition architectures which combine the three-dimensional convolutional neural network (3DCNN) and ConvLSTM, this paper explores the effects of the attention mechanism in ConvLSTM. Several variants of ConvLSTM are evaluated: (a) removing the convolutional structures of the three gates in ConvLSTM, (b) applying the attention mechanism to the input of ConvLSTM, and (c) reconstructing the input and (d) output gates respectively with the modified channel-wise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to the spatiotemporal feature fusion, and that the attention mechanisms embedded into the input and output gates cannot improve the feature fusion. In other words, ConvLSTM mainly contributes to the temporal fusion along the recurrent steps to learn long-term spatiotemporal features, when taking spatial or spatiotemporal features as input. On this basis, a new variant of LSTM is derived, in which the convolutional structures are only embedded into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available².

1 Introduction

Long short-term memory (LSTM) [1] recurrent neural networks are widely used to process sequential data [2]. Several variants of LSTM have been proposed since its inception in 1995 [3]. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, Shi et al. [4] proposed the convolutional LSTM (ConvLSTM) network to process sequential images for precipitation nowcasting. Thereafter, ConvLSTM has been used for action recognition [5, 6], gesture recognition [7–9] and in other fields [10–12]. When LSTM is used to process video or sequential images, the spatial features of two-dimensional convolutional neural networks (2DCNN) are generally vectorized before being fed into LSTM [13, 14]. However, two-dimensional spatial feature maps can be fed into ConvLSTM directly, without the loss of spatial correlation information. For example, the spatial feature maps of AlexNet/VGG-16 [5, 10] or the spatiotemporal feature maps of a three-dimensional CNN (3DCNN) [7, 8] are used as input of ConvLSTM. ConvLSTM was originally proposed to take images as input for precipitation nowcasting, so the spatial convolutions are necessary there to learn spatiotemporal features. However, how much do the convolutional structures of ConvLSTM contribute to the feature fusion when ConvLSTM takes spatial convolutional features instead of images as input? Is it necessary to have different gate values for each element of the feature maps in the spatial domain? The effect of the convolutional structures in ConvLSTM can be analyzed in three cases. (a) ConvLSTM takes original images as input. In this case, the convolutional structures are crucial to learn the spatiotemporal features, as verified in [4]. (b) ConvLSTM takes the feature maps of a 2DCNN as input.

∗Equal Contribution
²https://github.com/GuangmingZhu/AttentionConvLSTM
In this case, the effect of the convolutional structures is not always remarkable. Intuitively, the three gates of ConvLSTM can be viewed as the weighting mechanism for the feature map fusion. However, the different gate values for each element of the feature maps in the spatial domain seemingly do not have the function of spatial attention. Therefore, the soft attention mechanism [15] is additionally introduced into the input of ConvLSTM in [5], in order to make ConvLSTM focus on the noticeable spatial features. The improvement (as illustrated in Table 1 of [5]) caused by the attention mechanism on the input also verifies the above claim to some degree. (c) ConvLSTM takes the feature maps of a 3DCNN as input. Since the 3DCNN network has already learnt spatiotemporal features, the gates of ConvLSTM are even more unlikely to have the function of spatial attention. The last case is analyzed thoroughly in this paper. Based on our previously published "3DCNN+ConvLSTM+2DCNN" architecture [8], we construct a preliminary "Res3D+ConvLSTM+MobileNet" architecture and derive four variants of the ConvLSTM component. In the preliminary "Res3D+ConvLSTM+MobileNet" architecture, blocks 1-4 of Res3D [16] are used first to learn the local short-term spatiotemporal feature maps, which have a relatively large spatial size. Then, two ConvLSTM layers are stacked to learn the global long-term spatiotemporal feature maps. Finally, parts of MobileNet [17] are used to learn deeper features based on the learnt two-dimensional spatiotemporal feature maps. The Res3D and MobileNet blocks are fixed, and the ConvLSTM component is modified to derive four variants: (a) removing the convolutional structures of the gates by performing spatial global average pooling on the input and the hidden states beforehand, which reduces the convolutional operations in the three gates to fully-connected operations while the convolutional structures for the input-to-state transition are retained to learn the spatiotemporal features; (b) applying the soft attention mechanism to the input (i.e., the feature maps of the Res3D block) of ConvLSTM; (c) reconstructing the input gate using the channel-wise attention mechanism; (d) reconstructing the output gate using the channel-wise attention mechanism. We do not re-evaluate the cases in which ConvLSTM takes images or features of a 2DCNN as input, since the experiments in [4] and [5] already demonstrate the aforementioned claims. We focus on the evaluation of the third case on the large-scale isolated gesture datasets Jester [18] and IsoGD [19], since the "3DCNN+ConvLSTM+2DCNN" architecture was originally proposed for gesture recognition. Experimental results demonstrate that neither the convolutional structures in the three gates of ConvLSTM nor the extra spatial attention mechanisms contribute to performance improvements, given the fact that the input spatiotemporal features of the 3DCNN have already paid attention to the noticeable spatial features. This exploration of attention in ConvLSTM leads to a new variant of LSTM, different from both FC-LSTM and ConvLSTM: the variant only brings spatial convolutions to the input-to-state transition, and keeps the gates the same as the gates of FC-LSTM.

2 Attention in ConvLSTM

To ensure the completeness of the paper, the preliminary "Res3D+ConvLSTM+MobileNet" architecture is first described. Then, the variants of ConvLSTM are elaborated and analyzed.
2.1 The preliminary architecture

Two-stream or 3DCNN based networks are widely used for action recognition, such as the famous TSN [20], C3D [21], Res3D [16], and I3D [22] networks. Gesture recognition is different from action recognition: one cannot tell the category of a dynamic gesture from a single image, whereas an action can often be recognized from one image thanks to the hints of the background, objects and postures. Therefore, the aforementioned networks cannot produce state-of-the-art performance on gesture recognition without multimodal fusion. Gestures depend on the local information of hands and the global motions of arms. Thus, we use a shallow 3DCNN to learn the local short-term spatiotemporal features first. The 3DCNN block does not need to be deep, since it focuses on local features; therefore, the modified blocks 1-4 of Res3D are used. The temporal duration (or spatial size) of the output feature maps is only shrunk by a factor of 2 (or 4) compared with the input images. Then, a two-layer ConvLSTM network is stacked to learn the long-term spatiotemporal feature maps. The ConvLSTM network does not shrink the spatial size of the feature maps, so the spatiotemporal feature maps still have a relatively large spatial size. The top layers of MobileNet, whose inputs have the same spatial size, are further stacked to learn deeper features. The comparison with the aforementioned networks is given in the experimental part to demonstrate the advantages of the architecture (as displayed in Fig. 1).

2.2 The variants of ConvLSTM

Formally, ConvLSTM can be formulated as:

i_t = σ(W_xi ∗ X_t + W_hi ∗ H_{t−1} + b_i), (1)
f_t = σ(W_xf ∗ X_t + W_hf ∗ H_{t−1} + b_f), (2)
o_t = σ(W_xo ∗ X_t + W_ho ∗ H_{t−1} + b_o), (3)
G_t = tanh(W_xc ∗ X_t + W_hc ∗ H_{t−1} + b_c), (4)
C_t = f_t ∘ C_{t−1} + i_t ∘ G_t, (5)
H_t = o_t ∘ tanh(C_t), (6)

where σ is the sigmoid function, and W_x∼ and W_h∼ are 2-d convolution kernels. The input X_t, the cell state C_t, the hidden state H_t, the candidate memory G_t, and the gates i_t, f_t, o_t are all 3D tensors. The symbol "∗" denotes the convolution operator, and "∘" denotes the Hadamard product. The input X_t has a spatial size of W × H with C_in channels, and ConvLSTM has a convolutional kernel size of K × K with C_out channels. Thus, the parameter size of ConvLSTM (ignoring biases for simplicity) is

Param_ConvLSTM = K × K × (C_in + C_out) × C_out × 4. (7)

The parameter size of ConvLSTM is very large, partly due to the convolutional structures. It can be concluded from Eqs. (1)-(6) that the gates i_t, f_t, o_t have a spatial size of W × H with C_out channels (assuming same-padding convolutions). This means that the three gates have independent values for each element of the feature maps in the cell state and the candidate memory; a reference sketch of this cell is given below.
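For reference, a minimal PyTorch sketch of the standard ConvLSTM cell of Eqs. (1)-(6) follows. It is not the authors' code: the four gate pre-activations are computed by a single convolution over the concatenation [X_t, H_{t−1}], which is equivalent to using the separate kernels of Eqs. (1)-(4).

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell of Eqs. (1)-(6); one convolution produces all four
    gate pre-activations from the concatenated input and hidden state."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.conv = nn.Conv2d(c_in + c_out, 4 * c_out, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                   # H_{t-1}, C_{t-1}
        z = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(z, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                  # Eq. (5)
        h = o * torch.tanh(c)                          # Eq. (6)
        return h, c
```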
In this case, can ConvLSTM focus on the noticeable spatial regions with the help of different gate values in the spatial domain? In order to provide an answer and remove any doubt, four variants of ConvLSTM are constructed as follows (as illustrated in Fig. 2).

(a) Removing the convolutional structures of the gates. Given the local spatiotemporal features of the 3DCNN block, it can be assumed that the 3DCNN block has already paid attention to the noticeable spatial regions where there is valuable spatiotemporal information. Therefore, the ConvLSTM block can focus purely on the spatiotemporal feature fusion along the recurrent steps. The gate values then only need to be calculated per feature map of the states, not per element. Accordingly, a global average pooling is performed on the input features and the hidden states to reduce the spatial dimension, so that fully-connected operations can be performed instead of convolutions in the gates. The variant of ConvLSTM can be formulated as:

X̄_t = GlobalAveragePooling(X_t), (8)
H̄_{t−1} = GlobalAveragePooling(H_{t−1}), (9)
i_t = σ(W_xi X̄_t + W_hi H̄_{t−1} + b_i), (10)
f_t = σ(W_xf X̄_t + W_hf H̄_{t−1} + b_f), (11)
o_t = σ(W_xo X̄_t + W_ho H̄_{t−1} + b_o), (12)
G_t = tanh(W_xc ∗ X_t + W_hc ∗ H_{t−1} + b_c), (13)
C_t = f_t ∘ C_{t−1} + i_t ∘ G_t, (14)
H_t = o_t ∘ tanh(C_t), (15)

where X̄_t and H̄_{t−1} denote the pooled input and hidden state. The gates i_t, f_t and o_t are now one-dimensional vectors, so the elements in each feature map are weighted by the same gate value in Eqs. (14)-(15). The convolutional structures in the three gates are thus reduced to fully-connected operations, while the convolutional structures for the input-to-state transition (as in Eq. (13)) are reserved for the spatiotemporal feature fusion. In order to reduce the number of parameters of the input-to-state transition, depthwise separable convolutions [23] are used. This reduces the parameter size of the variant of ConvLSTM to

Param_ConvLSTM_va = (K × K + C_out × 4) × (C_in + C_out). (16)

A minimal sketch of this variant is given below.
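A PyTorch sketch of variant (a) follows; it is again our own illustration, not the released code. Its parameter count matches Eq. (16): K·K·(C_in+C_out) for the depthwise convolution, C_out·(C_in+C_out) for the pointwise convolution, and 3·C_out·(C_in+C_out) for the fully-connected gates.

```python
import torch
import torch.nn as nn

class GatePooledConvLSTMCell(nn.Module):
    """Sketch of variant (a): fully-connected gates on globally pooled features
    (Eqs. 8-12) and a depthwise-separable convolution for Eq. (13)."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.gates = nn.Linear(c_in + c_out, 3 * c_out)          # i, f, o
        self.dw = nn.Conv2d(c_in + c_out, c_in + c_out, k,
                            padding=k // 2, groups=c_in + c_out) # depthwise
        self.pw = nn.Conv2d(c_in + c_out, c_out, 1)              # pointwise

    def forward(self, x, state):
        h, c = state
        pooled = torch.cat([x.mean(dim=(2, 3)), h.mean(dim=(2, 3))], dim=1)
        i, f, o = torch.sigmoid(self.gates(pooled)).chunk(3, dim=1)
        i, f, o = (v[..., None, None] for v in (i, f, o))        # broadcast over W x H
        g = torch.tanh(self.pw(self.dw(torch.cat([x, h], dim=1))))  # Eq. (13)
        c = f * c + i * g                                        # Eq. (14)
        h = o * torch.tanh(c)                                    # Eq. (15)
        return h, c
```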
Three more variants are constructed based on variant (a), in order to verify whether spatial attention can improve the performance.

(b) Applying the attention mechanism to the inputs. Following [5], we apply the spatial attention mechanism to the inputs before the operations of Eqs. (8)-(15). Formally, the attention mechanism can be formulated as:

Z_t = W_z ∗ tanh(W_xa ∗ X_t + W_ha ∗ H_{t−1} + b_a), (17)
A_t^{ij} = p(att_{ij} | X_t, H_{t−1}) = exp(Z_t^{ij}) / Σ_i Σ_j exp(Z_t^{ij}), (18)
X̃_t = A_t ∘ X_t, (19)

where A_t is a 2-d score map, and W_z is a 2-d convolution kernel with a kernel size of K × K × C_in × 1. Variant (b) is constructed by replacing X_t in Eqs. (8)-(15) with X̃_t. The parameter size of this variant is

Param_ConvLSTM_vb = Param_ConvLSTM_va + K × K × (C_in + C_out × 2) + (C_in + C_out) × C_out. (20)

(c) Reconstructing the input gate using the channel-wise attention. Both the gate and the attention mechanisms perform convolutions on the input and the hidden states, as expressed in Eqs. (1)-(3) and Eq. (17). Does this mean that the gate mechanism implicitly has the function of attention? The answer is no: the independent gate values in the spatial domain of the feature maps cannot ensure the attention effect expressed in Eq. (18). Therefore, we reconstruct the input gate according to the attention mechanism. The sigmoid activation function makes the gate values fall in the range 0-1, whereas the division by the sum in Eq. (18) produces attention scores that sum to 1 in each feature channel. This means that the attention scores in each feature channel may be far less than 1, and far less than most of the normal gate values in the other gates, given the large spatial size of the input feature maps. Therefore, the attention mechanism needs to be modified to match the range of the sigmoid function in the gates. Formally, the input gate is reformulated as:

Z_t = W_i ∗ tanh(W_xi ∗ X_t + W_hi ∗ H_{t−1} + b_i), (21)
A_t^{ij}(c) = exp(Z_t^{ij}(c)) / max_{i,j} exp(Z_t^{ij}(c)), (22)
i_t = {A_t^{ij}(c) : (i, j, c) ∈ R^{W×H×C_out}}, (23)

where W_i is a 2-d convolution kernel with a kernel size of W × H and C_out channels, and max_{i,j} exp(Z_t^{ij}(c)) in Eq. (22) is the maximum element within channel c of Z_t. In other words, the normalization in Eq. (22) is performed channel-wise. The division by the maximum value instead of the sum ensures that the attention scores are distributed in the range of 0-1. Variant (c) of ConvLSTM is constructed by replacing the input gate of variant (a) with the new gate expressed by Eqs. (21)-(23). The parameter size of this variant is

Param_ConvLSTM_vc = Param_ConvLSTM_va + K × K × (C_in + C_out × 2) + C_out × C_out. (24)

(d) Reconstructing the output gate using the channel-wise attention. Variant (b) applies the attention mechanism to the input feature maps, while variant (c) applies it to the candidate memory. Finally, variant (d) is constructed by applying the attention mechanism to the cell state; in other words, the output gate is reconstructed in the same way as the input gate in variant (c). The expressions are similar to Eqs. (21)-(23) and are thus omitted for simplicity.

3 Experiments

The case in which ConvLSTM takes features from a 2DCNN as input has been evaluated in [5], and the improvement (as illustrated in Table 1 of [5]) caused by the attention mechanism on the input features indicates, to some degree, that the convolutional structures in the gates cannot play the role of spatial attention. Due to page restrictions, this paper only focuses on the evaluation of the case in which ConvLSTM takes features from a 3DCNN as input. As aforementioned, the "3DCNN+ConvLSTM+2DCNN" architecture was originally proposed for gesture recognition [8]. Therefore, the proposed variants of ConvLSTM are evaluated on the large-scale isolated gesture datasets Jester [18] and IsoGD [19].

3.1 Datasets

Jester [18] is a large collection of densely-labeled video clips. Each clip contains a pre-defined hand gesture performed by a worker in front of a laptop camera or webcam. The dataset includes 148,094 RGB video files of 27 kinds of gestures, and it is the largest isolated gesture dataset, with each category having more than 5,000 instances on average. Therefore, this dataset is used to train our networks from scratch. IsoGD [19] is a large-scale isolated gesture dataset which contains 47,933 RGB+D gesture videos of 249 kinds of gestures performed by 21 subjects. The dataset was used in the 2016 [24] and 2017 [25] ChaLearn LAP Large-scale Isolated Gesture Recognition Challenges, so our results can be compared directly with the state-of-the-art networks used in the challenges. Different multi-modal fusion methods were used by the teams in the challenges; in this paper, only the evaluation on each single modality is performed (without multi-modal fusion) to verify the advantages of the different deep architectures.

3.2 Implementation details

The base architecture is displayed in Fig. 1. The Res3D and MobileNet components are deployed from their original versions, except for the modifications mentioned in Section 2.1, and these two components are fixed among the variants. The filter numbers of ConvLSTM and its variants are all set to 256. The networks using the original ConvLSTM or the variants are first trained on the Jester dataset from scratch, and then fine-tuned on the IsoGD dataset to report the final results. For the training on Jester, the learning rate follows a polynomial decay from 0.001 to 0.000001 within a total of 30 epochs. Each input batch consists of 16 video clips, and each clip contains 16 frames with a spatial size of 112 × 112. Uniform sampling with the temporal jitter strategy [26] is utilized to preprocess the inputs.
During the fine-tuning on IsoGD, the batch size is set to 8, the temporal length to 32, and a total of 15 epochs are performed for each variant. The top-1 accuracy is used as the evaluation metric. Stochastic gradient descent (SGD) is used for training.

3.3 Explorative study

The networks which use the original ConvLSTM or the four variants as the ConvLSTM component in Fig. 1 are evaluated on the Jester and IsoGD datasets respectively. The evaluation results are illustrated in Table 1. On Jester, all networks achieve almost the same accuracy except variant (b). The similar recognition results on Jester may be caused by the network capacity or the distinguishability of the data, because the validation accuracy is comparable with the training accuracy. The lower accuracy of variant (b) may indicate the uselessness of the extra attention mechanism on the inputs, since the learnt spatiotemporal features of the 3DCNN have already paid attention to the noticeable spatial regions; the lower accuracy of variant (b) on IsoGD also supports this conclusion. The lower accuracy may be due to the additional optimization difficulty caused by the extra multiplication operations in the attention mechanism. The comparison on IsoGD shows that variant (a) is superior to the original ConvLSTM in terms of recognition accuracy, parameter size and computational consumption. Removing the convolutional structures in the three gates does not reduce the network capacity, but saves memory and computation significantly. The specific attention mechanisms embedded in the input and output gates do not contribute to the feature fusion, but only bring extra memory and computational consumption. These observations demonstrate that the ConvLSTM component only needs to make full use of its advantage in long-term temporal fusion when the input features already encode the local spatiotemporal information. LSTM/RNN has its superiority in processing long sequential data; the extension from LSTM to ConvLSTM only increases the dimensionality of the states and memory, and keeps the original gate mechanism unchanged. This evaluation leads to a new variant of LSTM (i.e., variant (a) of ConvLSTM), in which the convolutional structures are only introduced into the input-to-state transition, and the gates keep the original fully-connected mechanism. The added convolutional structures make this variant of LSTM capable of performing spatiotemporal feature fusion, while the gate mechanism sticks to its own responsibility and superiority for long-term temporal fusion.

3.4 Comparison with the state-of-the-art

Table 2 shows the comparison with the state-of-the-art networks on IsoGD. The 2DCNN networks demonstrate their superiority in image-based applications, and also show their ability for action recognition with the help of the specific backgrounds and objects. However, they do not keep their unbeatable performance on gesture recognition, where the fine-grained spatiotemporal features of hands and the global motions of arms matter. The 3DCNN networks are good at spatiotemporal feature learning, but their weakness in long-term temporal fusion restricts their capabilities. The "3DCNN+ConvLSTM+2DCNN" architecture makes full use of the advantages of 3DCNN, ConvLSTM and 2DCNN. The proposed variant (a) of ConvLSTM further enhances ConvLSTM's ability for spatiotemporal feature fusion, without any additional burden.
Therefore, the best recognition results can be obtained by making full use of the intrinsic advantages of the different networks. Although reference [27] reports the state-of-the-art performance on IsoGD, that high accuracy is achieved by fusing 12 channels (i.e., global/left/right channels for four modalities). The proposed network obtains the best accuracy on each single modality, which demonstrates the superiority of the proposed architecture.

3.5 Visualization of the feature map fusion

The reduction of the convolutional structures of the three gates in ConvLSTM brings no side effects to the spatiotemporal feature map fusion. Fig. 3 displays an example visualization of the feature map fusion along the recurrent steps. It can be seen from the heat maps that the most active regions reflect the hands' motion trajectories, similar to attention score maps. This also indicates that the learnt spatiotemporal features from the 3DCNN have paid attention to the noticeable spatial regions, and no extra attention mechanism is needed when fusing the long-term spatiotemporal feature maps using ConvLSTM. The reduction of the convolutional structures of the three gates also makes the variant more suitable for constructing complex deep architectures, since it has fewer parameters and lower computational consumption.

4 Conclusion

The effects of attention in convolutional LSTM are explored in this paper. Our evaluation results and previously published results show that the convolutional structures in the gates of ConvLSTM do not play the role of spatial attention, even though the gates have independent weight values for each element of the feature maps in the spatial domain. The reduction of the convolutional structures in the three gates results in better accuracy, a smaller parameter size and lower computational consumption. This leads to a new variant of LSTM, in which the convolutional structures are only added to the input-to-state transition, and the gates stick to their own responsibility and superiority for long-term temporal fusion. This makes the proposed variant capable of effectively performing spatiotemporal feature fusion, with fewer parameters and less computation.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under Grant No. 61702390, and the Fundamental Research Funds for the Central Universities under Grant JB181001.
1. What is the focus of the paper regarding ConvLSTM and its application? 2. What are the strengths of the proposed architecture, particularly in reducing parameters and computation? 3. What are the weaknesses of the paper, especially regarding attention and its relevance? 4. Do you think the paper could be improved by including more tasks, such as action recognition, and providing more comprehensive related work?
Review
Review Summary - The paper explores the effect of attention and convolutional structures in a ConvLSTM, for the specific case where the ConvLSTM takes spatio-temporal feature maps extracted by a 3D-CNN, and the task is gesture recognition. Experiments are shown on four variants, from which it is discerned that the best architecture is a ConvLSTM that uses fully connected layers for the gates and convolutional layers for the input-to-state transition, thereby decreasing the number of parameters and computation.
Strengths -
-> Paper is well written, easy to follow, and the figures are good.
-> Results obtained using variant a) on gesture recognition are very good, and establish that linear gates are sufficient for spatio-temporal fusion.
-> The proposed variant a) significantly reduces the number of parameters and computation.
Weaknesses -
-> The problem statement for attention is slightly shaky. It is evident that feeding spatio-temporal features from a 3D-CNN would diminish the need for an added attention mechanism inside the ConvLSTM. The earlier works, like VideoLSTM, exploit attention on convolutional feature maps, which is interesting. This exploration of attention on 3D-CNN feature maps, although rigorous, is not highly interesting or informative, in my opinion.
-> The paper shows results for the task of gesture recognition. The proposed architecture could have been adapted for the action recognition task and results shown on the popular UCF and HMDB datasets, to strengthen the arguments put forward in the paper.
-> Although relevant papers are discussed in the intro, the paper lacks a comprehensive related work section.
Overall assessment - The significance of the paper lies in the newly proposed architecture that reduces the number of parameters and computation, but is limited from a research perspective. However, it can add to the knowledge of people working on combining CNN and RNN architectures, as the writing is good and the experiments are detailed.
NIPS
Title Attention in Convolutional LSTM for Gesture Recognition Abstract Convolutional long short-term memory (LSTM) networks have been widely used for action/gesture recognition, and different attention mechanisms have also been embedded into the LSTM or the convolutional LSTM (ConvLSTM) networks. Based on the previous gesture recognition architectures which combine the threedimensional convolution neural network (3DCNN) and ConvLSTM, this paper explores the effects of attention mechanism in ConvLSTM. Several variants of ConvLSTM are evaluated: (a) Removing the convolutional structures of the three gates in ConvLSTM, (b) Applying the attention mechanism on the input of ConvLSTM, (c) Reconstructing the input and (d) output gates respectively with the modified channel-wise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to the spatiotemporal feature fusion, and the attention mechanisms embedded into the input and output gates cannot improve the feature fusion. In other words, ConvLSTM mainly contributes to the temporal fusion along with the recurrent steps to learn the long-term spatiotemporal features, when taking as input the spatial or spatiotemporal features. On this basis, a new variant of LSTM is derived, in which the convolutional structures are only embedded into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available2. 1 Introduction Long short-term memory (LSTM) [1] recurrent neural networks are widely used to process sequential data [2]. Several variants of LSTM have been proposed since its inception in 1995 [3]. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, Shi et al. [4] proposed the convolutional LSTM (ConvLSTM) network to process sequential images for precipitation nowcasting. Thereafter, ConvLSTM has been used for action recognition [5, 6], gesture recognition [7–9] and in other fields [10–12]. When LSTM is used to process video or sequential images, the spatial features of two-dimensional convolutional ∗Equal Contribution 2https://github.com/GuangmingZhu/AttentionConvLSTM 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. neural networks (2DCNN) are generally vectorized before feeding them as input of LSTM [13, 14]. However, the two-dimensional spatial feature maps can be fed into ConvLSTM directly, without the loss of the spatial correlation information. For example, the spatial feature maps of AlexNet/VGG-16 [5, 10] or the spatiotemporal feature maps of three-dimensional CNN (3DCNN) [7, 8] are used as input of ConvLSTM. ConvLSTM was originally proposed to take images as input for precipitation nowcasting, the spatial convolutions are therefore necessary to learn the spatiotemporal features. However, how much do the convolutional structures of ConvLSTM contribute to the feature fusion when ConvLSTM takes as input the spatial convolutional features instead of images? Is it necessary to have different gate values for each element of the feature maps in the spatial domain? The effect of the convolutional structures in ConvLSTM can be analyzed in three cases. (a) ConvLSTM takes original images as input. In this case, the convolutional structures are crucial to learn the spatiotemporal features, as verified in [4]. (b) ConvLSTM takes the feature maps of 2DCNN as input. 
In this case, the effect of the convolutional structures is not always remarkable. Intuitively, the three gates of ConvLSTM can be viewed as the weighting mechanism for the feature map fusion. However, the different gates values for each element of the feature maps in the spatial domain seemingly do not have the function of spatial attention. Therefore, the soft attention mechanism [15] is additionally introduced into the input of ConvLSTM in [5], in order to make ConvLSTM focus on the noticeable spatial features. The improvement (as illustrated in Table 1 of [5]) caused by the attention mechanism on the input can also verify the above claim in some degree. (c) ConvLSTM takes the feature maps of 3DCNN as input. Since the 3DCNN networks have learnt the spatiotemporal features, the gates of ConvLSTM are more unlikely to have the function of spatial attention. The last case will be analyzed thoroughly in this paper. Based on our previous published "3DCNN+ConvLSTM+2DCNN" architecture [8], we construct a preliminary "Res3D+ConvLSTM+MobileNet" architecture and derive four variants of the ConvLSTM component. In the preliminary "Res3D+ConvLSTM+MobileNet" architecture, the blocks 1-4 of Res3D [16] are used first to learn the local short-term spatiotemporal feature maps which have a relatively large spatial size. Then, two ConvLSTM layers are stacked to learn the global long-term spatiotemporal feature maps. Finally, parts of MobileNet [17] are used to learn deeper features based on the learnt two-dimensional spatiotemporal feature maps. The Res3D and MobileNet blocks are fixed, and the ConvLSTM component is modified to derive four variants: (a) Removing the convolutional structures of the gates by performing the spatial global average pooling on the input and the hidden states ahead. This means that the convolutional operations in the three gates are reduced to the fully-connected operations. The convolutional structures for the input-to-state transition are reserved to learn the spatiotemporal features. (b) Applying the soft attention mechanism to the input (i.e., the feature maps of the Res3D block) of ConvLSTM. (c) Reconstructing the input gate using the channel-wise attention mechanism. (d) Reconstructing the output gate using the channel-wise attention mechanism. We do not re-evaluate the cases that ConvLSTM takes as input images or features of 2DCNN in this paper, since the experiments in [4] and [5] can demonstrate the aforementioned claims. We focus on the evaluation of the third case on the large-scale isolated gesture datasets Jester [18] and IsoGD [19], since the "3DCNN+ConvLSTM+2DCNN" architecture was originally proposed for gesture recognition. Experimental results demonstrate that neither the convolutional structures in the three gates of ConvLSTM nor the extra spatial attention mechanisms contribute in the performance improvements, given the fact that the input spatiotemporal features of 3DCNN have paid attention to the noticeable spatial features. The exploring on the attention in ConvLSTM leads to a new variant of LSTM, which is different from the FC-LSTM and ConvLSTM. Specifically, the variant only brings the spatial convolutions to the input-to-state transition, and keeps the gates the same as the gates of FC-LSTM. 2 Attention in ConvLSTM To ensure the completeness of the paper, the preliminary "Res3D+ConvLSTM+MobileNet" architecture is first described. Then, the variants of ConvLSTM are elaborated and analyzed. 
2.1 The preliminary architecture Two-streams or 3DCNN based networks are widely used for action recognition, such as the famous TSN [20], C3D [21], Res3D [16], and I3D [22] networks. Gesture recognition is different from action recognition. You cannot tell the categories of the dynamic gestures when you only look at an image once. But, you may tell when you just look at an image of actions, under the hints of the backgrounds, objects and postures. Therefore, the aforementioned famous networks cannot produce the state-of-the-art performances on gesture recognition, without including multimodal fusion. Gestures focus on the local information of hands and the global motions of arms. Thus, we use a shallow 3DCNN to learn the local short-term spatiotemporal features first. The 3DCNN block does not need to be deep, since it focuses on the local features. Therefore, the modified blocks 1-4 of Res3D are used. The temporal duration (or spatial size) of the outputted feature maps is only shrunk by a ratio of 2 (or 4), compared with the inputted images. Then, a two-layer ConvLSTM network is stacked to learn the long-term spatiotemporal feature maps. The ConvLSTM network does not shrink the spatial size of the feature maps. Thus, the spatiotemporal feature maps still have a relative large spatial size. The top layers of MobileNet, whose inputs have the same spatial size, are further stacked to learn deeper features. The comparison with the aforementioned famous networks will be given in the experimental part to demonstrate the advantages of the architecture (as displayed in Fig. 1). 2.2 The variants of ConvLSTM Formally, ConvLSTM can be formulated as: it = σ(Wxi ∗Xt +Whi ∗Ht−1 + bi) (1) ft = σ(Wxf ∗Xt +Whf ∗Ht−1 + bf ) (2) ot = σ(Wxo ∗Xt +Who ∗Ht−1 + bo) (3) Gt = tanh(Wxc ∗Xt +Whc ∗Ht−1 + bc) (4) Ct = ft ◦ Ct−1 + it ◦Gt (5) Ht = ot ◦ tanh(Ct) (6) where σ is the sigmoid function, Wx∼ and Wh∼ are 2-d convolution kernels. The input Xt , the cell state Ct , the hidden state Ht, the candidate memory Gt, and the gates it, ft, ot are all 3D tensors. The symbol "*" denotes the convolution operator, and "o" denotes the Hadamard product. The input Xt has a spatial size of W ×H with Cin channels, and ConvLSTM has a convolutional kernel size of K ×K with Cout channels. Thus, the parameter size of ConvLSTM can be calculated as3: ParamConvLSTM = K ×K × (Cin + Cout)× Cout × 4 (7) The parameter size of ConvLSTM is very large, partly due to the convolutional structures. It can be concluded from Eqs. (1)-(6) that the gates it, ft, ot have a spatial size of W ×H with Cout channels4. It means that the three gates have independent values for each element of the feature maps in the cell state and the candidate memory. In this case, can ConvLSTM focus on the noticeable spatial regions with the help of different gate values in the spatial domain? In order to provide an answer and remove any doubt, four variants of ConvLSTM are constructed as follows (as illustrated in Fig. 2). 3The biases are ignored for simplicity. 4It is assumed that the convolutional structures have the same-padding style. (a) Removing the convolutional structures of the gates Given the local spatiotemporal features of the 3DCNN block, it can be considered that the 3DCNN block has paid attention to the noticeable spatial regions where there is valuable spatiotemporal information. Therefore, the ConvLSTM block can just focus on the spatiotemporal feature fusion along with the recurrent steps. 
(a) Removing the convolutional structures of the gates

Given the local spatiotemporal features of the 3DCNN block, it can be assumed that the 3DCNN block has already attended to the noticeable spatial regions containing valuable spatiotemporal information. The ConvLSTM block can therefore focus solely on spatiotemporal feature fusion along the recurrent steps. The gate values then only need to be calculated per feature map of the states, not per element. Accordingly, a global average pooling is performed on the input features and the hidden states to remove the spatial dimensions, so that fully-connected operations can replace the convolutions in the gates. This variant of ConvLSTM can be formulated as:

X̄_t = GlobalAveragePooling(X_t)    (8)
H̄_{t−1} = GlobalAveragePooling(H_{t−1})    (9)
i_t = σ(W_{xi} X̄_t + W_{hi} H̄_{t−1} + b_i)    (10)
f_t = σ(W_{xf} X̄_t + W_{hf} H̄_{t−1} + b_f)    (11)
o_t = σ(W_{xo} X̄_t + W_{ho} H̄_{t−1} + b_o)    (12)
G_t = tanh(W_{xc} ∗ X_t + W_{hc} ∗ H_{t−1} + b_c)    (13)
C_t = f_t ◦ C_{t−1} + i_t ◦ G_t    (14)
H_t = o_t ◦ tanh(C_t)    (15)

The gates i_t, f_t and o_t are now one-dimensional vectors, so the elements within each feature map are weighted by the same gate value in Eqs. (14)-(15). The convolutional structures in the three gates are reduced to fully-connected operations, while the convolutional structure of the input-to-state transition (Eq. (13)) is retained for spatiotemporal feature fusion. To further reduce the number of parameters of the input-to-state transition, depthwise separable convolutions [23] are used. This reduces the parameter size of the variant to:

Param_ConvLSTM-va = (K × K + C_out × 4) × (C_in + C_out)    (16)

Three more variants are constructed on top of variant (a), in order to verify whether spatial attention can improve the performance.
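A minimal PyTorch sketch of variant (a) follows (again our own illustration, not the authors' code): the gates act on globally pooled vectors and broadcast one value per feature map, while the input-to-state transition keeps spatial convolutions in depthwise separable form.

```python
import torch
import torch.nn as nn

class ConvLSTMCellVariantA(nn.Module):
    """Variant (a), Eqs. (8)-(15): fully-connected gates on pooled features,
    depthwise separable convolutions for the input-to-state transition."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # Fully-connected gate weights operating on pooled vectors.
        self.fc_x = nn.Linear(c_in, 3 * c_out)
        self.fc_h = nn.Linear(c_out, 3 * c_out, bias=False)
        # Depthwise separable convs for W_xc * X_t and W_hc * H_{t-1} (Eq. (13)).
        self.dw_x = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pw_x = nn.Conv2d(c_in, c_out, 1)
        self.dw_h = nn.Conv2d(c_out, c_out, k, padding=k // 2, groups=c_out, bias=False)
        self.pw_h = nn.Conv2d(c_out, c_out, 1, bias=False)

    def forward(self, x_t, state):
        h_prev, c_prev = state
        x_bar = x_t.mean(dim=(2, 3))        # Eq. (8): global average pooling
        h_bar = h_prev.mean(dim=(2, 3))     # Eq. (9)
        gates = torch.sigmoid(self.fc_x(x_bar) + self.fc_h(h_bar))
        i_t, f_t, o_t = torch.chunk(gates, 3, dim=1)  # Eqs. (10)-(12): 1-d gates
        i_t = i_t[..., None, None]          # broadcast one value per feature map
        f_t = f_t[..., None, None]
        o_t = o_t[..., None, None]
        g_t = torch.tanh(self.pw_x(self.dw_x(x_t))
                         + self.pw_h(self.dw_h(h_prev)))  # Eq. (13)
        c_t = f_t * c_prev + i_t * g_t      # Eq. (14)
        h_t = o_t * torch.tanh(c_t)         # Eq. (15)
        return h_t, c_t
```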
(b) Applying the attention mechanism to the inputs

Following [5], we apply the spatial attention mechanism to the inputs before the operations of Eqs. (8)-(15). Formally, the attention mechanism can be formulated as:

Z_t = W_z ∗ tanh(W_{xa} ∗ X_t + W_{ha} ∗ H_{t−1} + b_a)    (17)
A_t^{ij} = p(att_{ij} | X_t, H_{t−1}) = exp(Z_t^{ij}) / Σ_i Σ_j exp(Z_t^{ij})    (18)
X̃_t = A_t ◦ X_t    (19)

where A_t is a 2-d score map, and W_z is a 2-d convolution kernel of size K × K × C_in × 1. Variant (b) is constructed by replacing X_t in Eqs. (8)-(15) with X̃_t. The parameter size of this variant is:

Param_ConvLSTM-vb = Param_ConvLSTM-va + K × K × (C_in + C_out × 2) + (C_in + C_out) × C_out    (20)

(c) Reconstructing the input gate using the channel-wise attention

Both the gate and the attention mechanisms perform convolutions on the input and the hidden states, as expressed in Eqs. (1)-(3) and Eq. (17). Does this mean that the gate mechanism implicitly performs attention? The answer is no: independent gate values in the spatial domain of the feature maps cannot guarantee the attention effect expressed in Eq. (18). We therefore reconstruct the input gate according to the attention mechanism. The sigmoid activation makes the gate values fall in the range 0-1, whereas the division by the sum in Eq. (18) yields attention scores that sum to 1 within each feature channel. The attention scores in each channel may therefore be far less than 1, and far less than most of the normal gate values in the other gates, given the large spatial size of the input feature maps. The attention mechanism thus needs to be modified to match the output range of the sigmoid gates. Formally, the input gate is reformulated as:

Z_t = W_i ∗ tanh(W_{xi} ∗ X_t + W_{hi} ∗ H_{t−1} + b_i)    (21)
A_t^{ij}(c) = exp(Z_t^{ij}(c)) / max_{i,j} exp(Z_t^{ij}(c))    (22)
i_t = {A_t^{ij}(c) : (i, j, c) ∈ R^{W×H×C_out}}    (23)

where W_i is a 2-d convolution kernel with a kernel size of W × H and C_out channels. The term max_{i,j} exp(Z_t^{ij}(c)) in Eq. (22) is the maximum element within channel c of Z_t; in other words, the normalization in Eq. (22) is performed channel-wise. Dividing by the maximum value instead of the sum ensures that the attention scores are distributed in the range 0-1. Variant (c) of ConvLSTM is constructed by replacing the input gate of variant (a) with the new gate of Eqs. (21)-(23). The parameter size of this variant is:

Param_ConvLSTM-vc = Param_ConvLSTM-va + K × K × (C_in + C_out × 2) + C_out × C_out    (24)

(d) Reconstructing the output gate using the channel-wise attention

Variant (b) applies the attention mechanism to the input feature maps, while variant (c) applies it to the candidate memory. Finally, variant (d) is constructed by applying the attention mechanism to the cell state; in other words, the output gate is reconstructed in the same way as the input gate in variant (c). The expressions are analogous to Eqs. (21)-(23) and are omitted for brevity.
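A PyTorch sketch of the channel-wise attention gate of Eqs. (21)-(23) follows (our own illustration). Note that exp(z)/max exp(z) equals exp(z − max z), which is the numerically stable form used below; also, the paper specifies W_i with a W × H kernel, whereas the sketch substitutes a K × K depthwise convolution for simplicity, an assumption on our part.

```python
import torch
import torch.nn as nn

class ChannelwiseAttentionGate(nn.Module):
    """Channel-wise attention gate of Eqs. (21)-(23): per-channel spatial
    scores, normalized by the channel-wise maximum rather than the sum."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv_x = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.conv_h = nn.Conv2d(c_out, c_out, k, padding=k // 2, bias=False)
        # Stand-in for W_i (the paper uses a W x H kernel; we use K x K depthwise).
        self.conv_z = nn.Conv2d(c_out, c_out, k, padding=k // 2,
                                groups=c_out, bias=False)

    def forward(self, x_t, h_prev):
        z_t = self.conv_z(torch.tanh(self.conv_x(x_t) + self.conv_h(h_prev)))  # Eq. (21)
        # Eq. (22): exp(z)/max exp(z), computed stably as exp(z - max z).
        return torch.exp(z_t - z_t.amax(dim=(2, 3), keepdim=True))  # scores in (0, 1]
```

The returned tensor plays the role of i_t in Eqs. (14)-(15) for variant (c), or of o_t for variant (d).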
3 Experiments

The case in which ConvLSTM takes features from a 2DCNN as input has been evaluated in [5], and the improvement brought by the attention mechanism on the input features (illustrated in Table 1 of [5]) indicates, to some degree, that the convolutional structures in the gates cannot play the role of spatial attention. Due to page restrictions, this paper focuses only on evaluating the case in which ConvLSTM takes features from a 3DCNN as input. As mentioned above, the "3DCNN+ConvLSTM+2DCNN" architecture was originally proposed for gesture recognition [8]. The proposed variants of ConvLSTM are therefore evaluated on the large-scale isolated gesture datasets Jester [18] and IsoGD [19].

3.1 Datasets

Jester [18] is a large collection of densely-labeled video clips. Each clip shows a pre-defined hand gesture performed by a worker in front of a laptop camera or webcam. The dataset includes 148,094 RGB video files covering 27 kinds of gestures, and it is the largest isolated gesture dataset in which each category has more than 5,000 instances on average. This dataset is therefore used to train our networks from scratch.

IsoGD [19] is a large-scale isolated gesture dataset containing 47,933 RGB+D gesture videos of 249 kinds of gestures performed by 21 subjects. The dataset was used in the 2016 [24] and 2017 [25] ChaLearn LAP Large-scale Isolated Gesture Recognition Challenges, which allows our results to be compared with the state-of-the-art networks used in the challenges. The challenge teams used different multi-modal fusion methods; in this paper, only the evaluation on each single modality is performed (without multi-modal fusion) in order to compare the different deep architectures themselves.

3.2 Implementation details

The base architecture is displayed in Fig. 1. The Res3D and MobileNet components follow their original versions, except for the modifications described in Section 2.1, and are kept fixed across the variants. The number of filters is set to 256 for ConvLSTM and all of its variants. The networks using the original ConvLSTM or its variants are first trained on the Jester dataset from scratch, and then fine-tuned on the IsoGD dataset to report the final results. For training on Jester, the learning rate follows a polynomial decay from 0.001 to 0.000001 over a total of 30 epochs. Each input batch contains 16 video clips, and each clip contains 16 frames with a spatial size of 112 × 112. Uniform sampling with the temporal jitter strategy [26] is used to preprocess the inputs. During fine-tuning on IsoGD, the batch size is set to 8, the temporal length to 32, and a total of 15 epochs are performed for each variant. Top-1 accuracy is used as the evaluation metric, and stochastic gradient descent (SGD) is used for training.

3.3 Explorative study

The networks using the original ConvLSTM or one of the four variants as the ConvLSTM component of Fig. 1 are evaluated on the Jester and IsoGD datasets. The evaluation results are reported in Table 1. On Jester, all networks achieve almost the same accuracy except for variant (b). The similar recognition results on Jester may be explained by the network capacity or the distinguishability of the data, since validation accuracy is comparable to training accuracy. The lower accuracy of variant (b) suggests that the extra attention mechanism on the inputs is unnecessary, since the learnt spatiotemporal features of the 3DCNN have already attended to the noticeable spatial regions; the lower accuracy of variant (b) on IsoGD corroborates this conclusion. The drop may also stem from the additional optimization difficulty caused by the extra multiplication operations of the attention mechanism.

The comparison on IsoGD shows that variant (a) is superior to the original ConvLSTM in terms of recognition accuracy, parameter size and computational cost alike. Removing the convolutional structures from the three gates does not reduce the network capacity, but saves memory and computation significantly. The attention mechanisms embedded in the input and output gates do not contribute to the feature fusion; they only add memory and computational overhead. These observations demonstrate that the ConvLSTM component only needs to exploit its strength in long-term temporal fusion when the input features already encode local spatiotemporal information. LSTM/RNN excels at processing long sequential data, and the extension from LSTM to ConvLSTM only increases the dimensionality of the states and memory while keeping the original gate mechanism unchanged. This evaluation leads to a new variant of LSTM (i.e., variant (a) of ConvLSTM), in which the convolutional structures are introduced only into the input-to-state transition, while the gates keep the original fully-connected mechanism. The added convolutional structures make this variant capable of spatiotemporal feature fusion, while the gate mechanism sticks to its own responsibility and strength: long-term temporal fusion.

3.4 Comparison with the state-of-the-art

Table 2 compares our results with the state-of-the-art networks on IsoGD. The 2DCNN networks demonstrate their superiority in image-based applications, and also perform well for action recognition with the help of the specific backgrounds and objects. However, they do not maintain this performance for gesture recognition, where the fine-grained spatiotemporal features of the hands and the global motions of the arms matter. The 3DCNN networks are good at spatiotemporal feature learning, but their weakness in long-term temporal fusion restricts their capabilities. The "3DCNN+ConvLSTM+2DCNN" architecture makes full use of the advantages of 3DCNN, ConvLSTM and 2DCNN, and the proposed variant (a) of ConvLSTM further enhances ConvLSTM's ability for spatiotemporal feature fusion without any additional burden. The best recognition results can thus be obtained by making full use of the intrinsic advantages of the different networks. Although [27] reports the state-of-the-art performance on IsoGD, its high accuracy is achieved by fusing 12 channels (i.e., global/left/right channels for four modalities). The proposed network obtains the best accuracy on each single modality, which demonstrates the superiority of the proposed architecture.
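The parameter-size claims above follow directly from the closed forms of Eqs. (7), (16), (20) and (24); a small script makes the comparison concrete. The 3 × 3 kernel and 256-channel setting below are illustrative choices of ours, not figures quoted from the paper.

```python
def param_sizes(k, c_in, c_out):
    """Parameter counts (biases ignored) for ConvLSTM and its variants."""
    base = k * k * (c_in + c_out) * c_out * 4                        # Eq. (7)
    va = (k * k + c_out * 4) * (c_in + c_out)                        # Eq. (16)
    vb = va + k * k * (c_in + c_out * 2) + (c_in + c_out) * c_out    # Eq. (20)
    vc = va + k * k * (c_in + c_out * 2) + c_out * c_out             # Eq. (24)
    return {"ConvLSTM": base, "variant (a)": va,
            "variant (b)": vb, "variant (c)": vc}

# Example: 3x3 kernels, 256 input and 256 output channels (illustrative).
for name, n in param_sizes(3, 256, 256).items():
    print(f"{name}: {n / 1e6:.2f} M parameters")
```

Under this setting, variant (a) uses roughly an order of magnitude fewer parameters than the original ConvLSTM (about 0.53 M versus 4.72 M).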
3.5 Visualization of the feature map fusion

Removing the convolutional structures of the three gates in ConvLSTM has no side effects on spatiotemporal feature map fusion. Fig. 3 displays an example visualization of the feature map fusion along the recurrent steps. The heat maps show that the most active regions closely follow the hands' motion trajectories, similar to attention score maps. This again indicates that the spatiotemporal features learnt by the 3DCNN have already attended to the noticeable spatial regions, and that no extra attention mechanism is needed when fusing the long-term spatiotemporal feature maps with ConvLSTM. Moreover, removing the convolutional structures of the three gates makes the variant more suitable for building more complex deep architectures, since it has fewer parameters and a lower computational cost.

4 Conclusion

This paper explores the effects of attention in convolutional LSTM. Our evaluation results, together with previously published results, show that the convolutional structures in the gates of ConvLSTM do not play the role of spatial attention, even though the gates have independent values for each element of the feature maps in the spatial domain. Removing the convolutional structures from the three gates yields better accuracy, a smaller parameter size and a lower computational cost. This leads to a new variant of LSTM, in which convolutional structures are added only to the input-to-state transition, while the gates keep their own responsibility and strength for long-term temporal fusion. The proposed variant can thus perform spatiotemporal feature fusion effectively with fewer parameters and less computation.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under Grant No. 61702390, and the Fundamental Research Funds for the Central Universities under Grant JB181001.
1. What is the focus of the paper in terms of its contributions and novel aspects? 2. What are the strengths of the proposed approach, particularly in terms of its ability to capture spatiotemporal relationships? 3. What are the weaknesses of the paper regarding its claims and experiments? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns regarding the runtime efficiency of the proposed method, especially when compared to the original ConvLSTM module?
Review
Review This is a form of an ablation study for the case of ConvLSTMs whose inputs are specifically 3DCNN feature maps. The paper demonstrates that some standard operations of the ConvLSTM module (namely the convolutional structure of the gates and attention) do not, in fact, contribute meaningfully to learning better models. This is intuitively sensible since the spatiotemporal relationships are already captured in the input features. This is evaluated for gesture recognition. Furthermore, this intuition and its experimental verification inspire a new architecture, different from ConvLSTM or a fully connected LSTM, which retains the performance of ConvLSTM with considerably fewer parameters. The paper is written clearly. I would advise keeping Eqs. (8)-(15) in one block without a page break for readability. The main reason for the score is that this is practically useful, but from a research point of view it is a relatively minor contribution, as it is a form of relatively straightforward architecture exploration. How is the runtime affected by the reduced parameterization? That should be reported in the paper.
NIPS
Title Attention in Convolutional LSTM for Gesture Recognition

Abstract Convolutional long short-term memory (LSTM) networks have been widely used for action/gesture recognition, and different attention mechanisms have also been embedded into the LSTM or the convolutional LSTM (ConvLSTM) networks. Based on previous gesture recognition architectures which combine the three-dimensional convolutional neural network (3DCNN) and ConvLSTM, this paper explores the effects of the attention mechanism in ConvLSTM. Several variants of ConvLSTM are evaluated: (a) removing the convolutional structures of the three gates in ConvLSTM, (b) applying the attention mechanism on the input of ConvLSTM, (c) reconstructing the input and (d) output gates respectively with the modified channel-wise attention mechanism. The evaluation results demonstrate that the spatial convolutions in the three gates scarcely contribute to the spatiotemporal feature fusion, and that the attention mechanisms embedded into the input and output gates cannot improve the feature fusion. In other words, ConvLSTM mainly contributes to the temporal fusion along the recurrent steps to learn the long-term spatiotemporal features, when taking spatial or spatiotemporal features as input. On this basis, a new variant of LSTM is derived, in which the convolutional structures are only embedded into the input-to-state transition of LSTM. The code of the LSTM variants is publicly available.2

∗Equal Contribution
2 https://github.com/GuangmingZhu/AttentionConvLSTM
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.

1 Introduction

Long short-term memory (LSTM) [1] recurrent neural networks are widely used to process sequential data [2]. Several variants of LSTM have been proposed since its inception in 1995 [3]. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, Shi et al. [4] proposed the convolutional LSTM (ConvLSTM) network to process sequential images for precipitation nowcasting. Thereafter, ConvLSTM has been used for action recognition [5, 6], gesture recognition [7-9] and in other fields [10-12]. When LSTM is used to process video or sequential images, the spatial features of two-dimensional convolutional neural networks (2DCNN) are generally vectorized before being fed as input to LSTM [13, 14]. However, two-dimensional spatial feature maps can be fed into ConvLSTM directly, without losing the spatial correlation information. For example, the spatial feature maps of AlexNet/VGG-16 [5, 10] or the spatiotemporal feature maps of a three-dimensional CNN (3DCNN) [7, 8] are used as input of ConvLSTM. ConvLSTM was originally proposed to take images as input for precipitation nowcasting; in that setting, the spatial convolutions are necessary to learn spatiotemporal features. However, how much do the convolutional structures of ConvLSTM contribute to the feature fusion when ConvLSTM takes spatial convolutional features instead of images as input? Is it necessary to have different gate values for each element of the feature maps in the spatial domain? The effect of the convolutional structures in ConvLSTM can be analyzed in three cases. (a) ConvLSTM takes original images as input. In this case, the convolutional structures are crucial to learn the spatiotemporal features, as verified in [4]. (b) ConvLSTM takes the feature maps of 2DCNN as input.
1. What is the focus of the paper regarding gesture recognition? 2. What are the strengths of the proposed approach, particularly in terms of attention mechanisms and pipeline architecture? 3. Do you have any concerns or questions about the paper's experiments and comparisons with other works? 4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? 5. Are there any additional related works that could be considered for further research in integrating CNNs and recurrent architectures?
Review
Review SUMMARY: This paper proposes a pipeline combining Res3D (3DCNN), ConvLSTM and MobileNet (2DCNN) into a hybrid architecture for gesture recognition. In particular, it explores the integration of pooling and neural attention mechanisms in ConvLSTM cells. Four ConvLSTM variants are compared, which place pooling and attention mechanisms at different locations inside the cell. The pooling leads to a somewhat novel LSTM/ConvLSTM hybrid architecture. Finally, an attention-inspired gating mechanism is proposed, with some differences to the formulation in [5]. The proposed architecture is evaluated on the Jester and IsoGD data sets and achieves state-of-the-art results in the RGB-only setting of IsoGD.

QUALITY: In the LSTM gates, fully connected weight matrices and matrix multiplication are used. Are there good reasons for doing this? In the original LSTM paper [3], a two-vector Hadamard product is proposed (Section II.A on the forward pass) instead of matrix multiplication. It would be of interest to compare the matrix-product and Hadamard-product formulations of the gates. Following [3], it could also be interesting to explore gating based on output values instead of the cell state. The proposed max-pooling approach should allow this, given that same padding was used in the convolutions, so that the spatial resolution inside and outside the cell remains the same. Published attention mechanisms have thus far been based on the notion of probability distributions, e.g., in [r6], [r7] or even the cited [5] on which the current attention mechanism is based. However, the proposed "attention" mechanism in equation (22) breaks the probabilistic interpretation by using a max instead of a sum operation. The explanation given in lines 158-159 is not very clear on why using a max (rather than a sum) distributes the scores between 0 and 1. Experimentation-wise, all proposed modifications have been tested. Additionally, a comparison of the attention weights used in variant (c) with a classical soft attention formulation as described in equation (18) would have been interesting, to enable readers to assess the newly proposed mechanism from Eq. (22). Overall, the results are state of the art and suffice to support that the authors successfully reduced the number of ConvLSTM parameters, though it is hard to assess the modified attention mechanism proposed by the authors. In particular, looking at the supplementary material, it appears (since the subjects are all stationary and the only movement in the scene is relevant to the hand gesture) that simple flow detection would suffice. To prove that the attention mechanism is actually working, one would need a baseline comparing against optical flow / movement detection. Also, are there border effects in the attention? There seem to be very strong responses at the bottom edge of the scene for almost all samples. Some comparisons to other methods are missing; e.g., the experimental results on the Jester data set are not compared to the state of the art: they are better than [r4] but worse than [r2]. [r2] reports an accuracy of 96.33% on the Jester V1 data set (using flow and RGB), while [r1] reports an accuracy of 80.96% on the IsoGD data set. The paper provides a comparison only to [8], and only to the RGB results, which are significantly worse (51.31% / 58.65% for RGBD). Finally, some statistical analysis would make the results more convincing.

CLARITY: The paper is relatively easy to read and follow, though some important details are omitted, e.g.
What are the performance metrics being used for evaluation? What variant of gradient descent is used?

ORIGINALITY: Attention has been shown to be beneficial in vision problems. The pooling approach is somewhat interesting. Additional related works on gesture recognition are given below in [r1, r2]. Another work related to ConvLSTM is [r5], which presents a ConvGRU; an interesting aside is [r3], which found that recurrent dropout may be applied to past state values during gate and candidate value computation. This means that, for an RNN to function, past states need not be available in their original values at the gates, which may be why global average pooling can be regarded as an interesting choice in the ConvLSTM case when computing gate values.

SIGNIFICANCE: The main contribution of the paper is the experimentation; the paper evaluates four variants of LSTMs to test the effects of attention in LSTMs. It is found that spatial convolutions in the LSTM gates do not contribute to spatiotemporal feature fusion; furthermore, attention mechanisms embedded into the input and output gates also do not improve the feature fusion. As such, the authors recommend a new cell architecture which reduces the number of required weights, leading to smaller memory footprints and less training time. Such a cell may also be interesting for those working on action recognition and video understanding. The work adds to the body of literature for anyone working on integrating CNNs and recurrent architectures.

EXTRA REFERENCES:
[r1] Gesture Recognition: Focus on the Hands, by Pradyumna Narayana and J. Ross Beveridge, in CVPR 2018.
[r2] Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition, by Okan Köpüklü, Neslihan Köse and Gerhard Rigoll, in CVPR Workshops 2018.
[r3] Recurrent Dropout without Memory Loss, by Stanislau Semeniuta, Aliaksei Severyn and Erhardt Barth, in Proceedings of COLING 2016.
[r4] Temporal Relational Reasoning in Videos, by Bolei Zhou, Alex Andonian and Antonio Torralba, https://arxiv.org/pdf/1711.08496.pdf.
[r5] Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model, by Xingjian Shi et al., in NIPS 2017.
[r6] Listen, Attend and Spell, by William Chan, Navdeep Jaitly, Quoc V. Le and Oriol Vinyals, in ICASSP 2016.
[r7] Neural Machine Translation by Jointly Learning to Align and Translate, by Dzmitry Bahdanau, Kyunghyun Cho and Yoshua Bengio, in ICLR 2015.
NIPS
Title Hyperparameter Ensembles for Robustness and Uncertainty Quantification

Abstract Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters, to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

1 Introduction

Neural networks are well-suited to form ensembles of models [30]. Indeed, neural networks trained from different random initializations can lead to equally well-performing models that are nonetheless diverse, in that they make complementary errors on held-out data [30]. This property is explained by the multi-modal nature of their loss landscape [24] and the randomness induced by both their initialization and the stochastic methods commonly used to train them [8, 38, 9]. Many mechanisms have been proposed to further foster diversity in ensembles of neural networks, e.g., based on cyclical learning rates [36] or Bayesian analysis [17]. In this paper, we focus on exploiting the diversity induced by combining neural networks defined by different hyperparameters. This concept is already well-established [13], and the auto-ML community actively applies it [21, 65, 53, 46]. We build upon this research with the following two complementary goals. First, for performance independent of computational and memory budget, we seek to improve upon deep ensembles [43], the current state-of-the-art ensembling method in terms of robustness and uncertainty quantification [64, 28]. To this end, we develop a simple stratification scheme which combines random search and the greedy selection of hyperparameters from [13] with the benefit of multiple random initializations per hyperparameter, as in deep ensembles. Figure 1 illustrates our algorithm for a Wide ResNet 28-10, where it leads to substantial improvements, highlighting the benefit of combining different initializations and hyperparameters. Second, we seek to improve upon batch ensembles [69], the current state-of-the-art in efficient ensembles. To this end, we propose a parameterization combining that of [69] and self-tuning networks [52], which enables both weight and hyperparameter diversity. Our approach is a drop-in replacement that outperforms batch ensembles and does not need a separate tuning of the hyperparameters.

1.1 Related work

Ensembles over neural network weights. Combining the outputs of several neural networks to improve their individual performance has a long history, e.g., [47, 30, 25, 41, 58, 15].
Since the quality of an ensemble hinges on the diversity of its members [30], many mechanisms have been developed to generate diverse ensemble members. For instance, cyclical learning-rate schedules can explore several local minima [36, 76] at which ensemble members can be snapshot. Other examples are MC dropout [23] and the random initialization itself, possibly combined with the bootstrap [45, 43]. More generally, Bayesian neural networks can be seen as ensembles whose members are weighted by the (approximate) posterior distribution over the parameters [34, 51, 56, 7, 71, 72].

Hyperparameter ensembles. Hyperparameter-tuning methods [20] typically produce a pool of models from which ensembles can be constructed post hoc, e.g., [65]. This idea has been made systematic as part of auto-sklearn [21] and successfully exploited in several other contexts, e.g., [19], and specifically for neural networks [53] as well as in computer vision [60] and genetics [35]. In particular, the greedy ensemble construction from [13] (and later variations thereof [12]) was shown to work best among alternative algorithms that are either more expensive or more prone to overfitting. To the best of our knowledge, such ensembles based on hyperparameters have not been studied in the light of predictive uncertainty. Moreover, we are not aware of existing methods to efficiently build such ensembles, analogous to what batch ensembles do for deep ensembles. Finally, recent research in Bayesian optimization has also focused on directly optimizing the performance of the ensemble while tuning the hyperparameters [46]. Hyperparameter ensembles also connect closely to probabilistic models over structures. These works often analyze Bayesian nonparametric distributions, such as over the depth and width of a neural network, leveraging Markov chain Monte Carlo for inference [37, 1, 18, 42]. In this work, we examine more parametric assumptions, building on the success of variational inference and mixture distributions: for example, the validation step in hyper-batch ensembles can be viewed as a mixture variational posterior, and the entropy penalty is the ELBO's KL divergence toward a uniform prior. Concurrently with our paper, [75] construct neural network ensembles within the context of neural architecture search, showing improved robustness for predictions under distributional shift. One of their methods, NES-RS, has similarities with our hyper-deep ensembles (see Section 3), also relying on both random search and [13] to form ensembles, but it does not stratify over different initializations. We vary the hyperparameters while keeping the architecture fixed, whereas [75] study the converse. Furthermore, [75] do not explore a parameter- and computationally-efficient method (see Section 4).

Efficient hyperparameter tuning & best-response function. Some hyperparameters of a neural network, e.g., its L2 regularization parameter(s), can be optimized by estimating the best-response function [26], i.e., the mapping from the hyperparameters to the parameters of the neural networks solving the problem at hand [11]. Learning this mapping is an instance of learning a hypernetwork [61, 62, 29] and falls within the scope of bilevel optimization problems [14]. Because of the daunting complexity of this mapping, [50, 52] proposed scalable local approximations of the best-response function. Similar methodology has also been employed for style transfer and image compression [3, 16].
The self-tuning networks from [52] are an important building block of our approach, wherein we extend their setting to the case of an ensemble over different hyperparameters.

1.2 Contributions

We examine two regimes to exploit hyperparameter diversity: (a) ensemble performance independent of budget and (b) ensemble performance seeking parameter efficiency, where, respectively, deep and batch ensembles [43, 69] are the state-of-the-art. We propose one ensemble method for each regime:

(a) Hyper-deep ensembles. We define a greedy algorithm to form ensembles of neural networks exploiting two sources of diversity: varied hyperparameters and random initialization. By stratifying models with respect to the latter, our algorithm subsumes deep ensembles, which we outperform in our experiments. Our approach is a simple, strong baseline that we hope will be used in future research.

(b) Hyper-batch ensembles. We efficiently construct ensembles of neural networks defined over different hyperparameters. Both the ensemble members and their hyperparameters are learned end-to-end in a single training procedure, directly maximizing the ensemble performance. Our approach outperforms batch ensembles and generalizes the layer structure of [52] and [69], while keeping their original memory compactness and efficient minibatching for parallel training and prediction.

We illustrate the benefits of our two ensemble methods on image classification tasks, with multi-layer perceptron, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, in terms of both predictive performance and uncertainty. The code for generic hyper-batch ensemble layers can be found in https://github.com/google/edward2, and the code to reproduce the experiments of Section 5.2 is part of https://github.com/google/uncertainty-baselines.

2 Background

We introduce the notation and background required to define our approach. Consider an i.i.d. classification setting with data D = {(x_n, y_n)}_{n=1}^N, where x_n ∈ R^d is the feature vector of the n-th example and y_n its class label. We seek to learn a classifier in the form of a neural network f_θ, where all its parameters (weights and bias terms) are summarized in θ ∈ R^p. In addition to its primary parameters θ, the model f_θ also depends on m hyperparameters that we refer to as λ ∈ R^m. For instance, an entry in λ could correspond to the dropout rate of a given layer in f_θ. Equipped with some loss function ℓ, e.g., the cross entropy, and some regularization term Ω(·, λ), e.g., the squared L2 norm with a strength defined by an entry of λ, we are interested in

θ̂(λ) ∈ argmin_{θ ∈ R^p} E_{(x,y)∈D}[L(x, y, θ, λ)]  with  L(x, y, θ, λ) = ℓ(f_θ(x, λ), y) + Ω(θ, λ),    (1)

where E_{(x,y)∈D}[·] stands for the expectation under a uniform distribution over D. As we shall see in Section 5, the loss ℓ = ℓ_λ can also depend on λ, for instance to control a label smoothing parameter [67]. In general, λ is chosen based on some held-out evaluation metric by grid search, random search [6] or more sophisticated hyperparameter-tuning methods [20].

2.1 Deep ensembles and batch ensembles

Deep ensembles [43] are a simple ensembling method where neural networks with different random initializations are combined. Deep ensembles lead to remarkable predictive performance and robust uncertainty estimates [64, 28]. Given some hyperparameters λ_0, a deep ensemble of size K amounts to solving (1) K times with random initialization and aggregating the outputs of {f_{θ̂_k(λ_0)}(·, λ_0)}_{k=1}^K.
Batch ensembles [69] are a state-of-the-art efficient alternative to deep ensembles, preserving their performance while reducing their computational and memory burden. To simplify the presentation, we focus on the example of a dense layer in f_θ, with weight matrix W ∈ R^{r×s}, where r and s denote the input and output dimensions of the layer respectively. A deep ensemble of size K needs to train, predict with, and store K weight matrices {W_k}_{k=1}^K. Instead, batch ensembles consider a single matrix W ∈ R^{r×s} together with two sets of auxiliary vectors [r_1, ..., r_K] ∈ R^{r×K} and [s_1, ..., s_K] ∈ R^{s×K}, such that the role of W_k is played by

W ◦ (r_k s_k^⊤)  for each k ∈ {1, ..., K},    (2)

where ◦ denotes the element-wise product (which we broadcast row-wise or column-wise depending on the shapes at play). Not only does (2) lead to memory savings, it also allows for efficient minibatching, where each datapoint may use a different ensemble member. Given a batch of inputs X ∈ R^{b×r}, the predictions for the k-th member equal X[W ◦ (r_k s_k^⊤)] = [(X ◦ r_k^⊤)W] ◦ s_k^⊤. By properly tiling the batch X, the K members can thus predict in parallel in one forward pass [69].
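A minimal PyTorch sketch of such a dense batch ensemble layer follows. This is our own illustration under the identity above; the reference implementation lives in edward2 [68], and the names below are ours.

```python
import torch
import torch.nn as nn

class DenseBatchEnsemble(nn.Module):
    """Dense batch ensemble layer of Eq. (2): one shared weight matrix W
    diversified by K rank-1 factors r_k s_k^T."""
    def __init__(self, in_dim, out_dim, k_members):
        super().__init__()
        self.w = nn.Parameter(torch.randn(in_dim, out_dim) * in_dim ** -0.5)
        self.r = nn.Parameter(torch.ones(k_members, in_dim))   # r_1 .. r_K
        self.s = nn.Parameter(torch.ones(k_members, out_dim))  # s_1 .. s_K
        self.k = k_members

    def forward(self, x):
        # x: (B, in_dim). Tile the batch so all K members predict in one pass:
        # X [W o (r_k s_k^T)] = [(X o r_k^T) W] o s_k^T.
        b = x.shape[0]
        xt = x.repeat(self.k, 1)                             # (K*B, in_dim)
        rk = self.r.repeat_interleave(b, dim=0)              # (K*B, in_dim)
        sk = self.s.repeat_interleave(b, dim=0)              # (K*B, out_dim)
        return ((xt * rk) @ self.w) * sk                     # (K*B, out_dim)
```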
Alternating optimization. The procedure followed by [52] consists in alternating between training and tuning steps. First, the training step performs a stochastic gradient update of $\Theta$ in (4), jointly sampling $\lambda \sim p(\lambda|\xi_t)$ and $(x, y) \in \mathcal{D}$. Second, the tuning step makes a stochastic gradient update of $\xi_t$ by minimizing some validation objective (e.g., the cross entropy):

$$\min_{\xi_t} \mathbb{E}_{\lambda \sim p(\lambda|\xi_t),\,(x,y)\in\mathcal{D}_{\text{val}}}\big[\ell_{\text{val}}(f_\Theta(x, \lambda), y)\big]. \quad (5)$$

In (5), derivatives are taken through samples $\lambda \sim p(\lambda|\xi_t)$ by applying the reparametrization trick [39]. To prevent $p(\lambda|\xi_t)$ from collapsing to a degenerate distribution, and inspired by variational inference, the authors of [52] add an entropy regularization term $\mathcal{H}[\cdot]$ controlled by $\tau \ge 0$, so that (5) becomes

$$\min_{\xi_t} \mathbb{E}_{\lambda \sim p(\lambda|\xi_t),\,(x,y)\in\mathcal{D}_{\text{val}}}\big[\ell_{\text{val}}(f_\Theta(x, \lambda), y) - \tau\,\mathcal{H}[p(\lambda|\xi_t)]\big]. \quad (6)$$

3 Hyper-deep ensembles

Figure 2-(left) visualizes different models $f_\theta(\cdot, \lambda)$ according to their hyperparameters $\lambda$ along the x-axis and their initialization $\theta_{\text{init}}$ on the y-axis. In this view, a deep ensemble corresponds to a "column" where models with different random initialization are combined together, for a fixed $\lambda$. On the other hand, a "row" corresponds to the combination of models with different hyperparameters. Such a "row" typically stems from the application of hyperparameter-tuning techniques [20].

Fixed initialization hyper ensembles. Given the simplicity, broad applicability, and performance of the greedy algorithm from [13], e.g., in auto-ML settings [21], we use it as our canonical procedure to generate a "row", i.e., an ensemble of neural networks with fixed parameter initialization and various hyperparameters. We refer to it as fixed init hyper ensemble. For completeness, we recall the procedure from [13] in Appendix A (Algorithm 2, named hyper_ens). Given an input set of models (e.g., from random search), hyper_ens greedily grows an ensemble until some target size K is met, by selecting the model with the best improvement of some score, e.g., the validation log-likelihood; a sketch of this subroutine is given at the end of this section. We select the models with replacement to be able to learn weighted combinations thereof (see Section 2.1 in [13]). Note that the procedure from [13] does not require the models to have a fixed initialization: we consider here a fixed initialization to isolate the effect of just varying the hyperparameters (while deep ensembles vary only the initialization, with fixed hyperparameters).

Our goal is two-fold: (a) we want to demonstrate the complementarity of random initialization and hyperparameters as sources of diversity in the ensemble, and (b) design a simple algorithmic scheme that exploits both sources of diversity while encompassing the construction of deep ensembles as a subcase. We defer the study of (a) to Section 5 and next focus on (b).

Hyper-deep ensembles. We proceed in three main steps, as summarized in Algorithm 1. In lines 1-2, we first generate one "row" according to hyper_ens, based on the results of random search [6] as input. We then tile and stratify that "row" by training the models for different random initializations (see lines 4-7). The resulting set of models is illustrated in Figure 2-(left). In line 10, we finally re-apply hyper_ens on that stratified set of models to extract an ensemble that can exploit the two sources of diversity. By design, a deep ensemble is one possible outcome of this procedure (one "column"), and so is the fixed init hyper ensemble described in the previous paragraph (one "row").

Algorithm 1: hyper_deep_ens(K, κ)
 1: M_0 = {f_{θ_j}(·, λ_j)}_{j=1}^{κ} ← rand_search(κ)
 2: E_0 ← hyper_ens(M_0, K) and E_strat = {}
 3: foreach f_θ(·, λ) ∈ E_0.unique() do
 4:   foreach k ∈ {1, ..., K} do
 5:     θ' ← random initialization
 6:     f_{θ_k}(·, λ) ← train f_{θ'}(·, λ)
 7:     E_strat = E_strat ∪ {f_{θ_k}(·, λ)}
 8:   end
 9: end
10: return hyper_ens(E_strat, K)

In lines 1-2, running random search leads to a set of κ models (i.e., M_0). If we were to stratify all of them, we would need K seeds for each of those κ models, hence a total of O(κK) models to train. However, we first apply hyper_ens to extract K models out of the κ available ones, with K ≪ κ. The stratification then needs K seeds for each of those K models (lines 4-7), thus O(K^2) models to train. We will see in Section 5 that even with standard hyperparameters, e.g., dropout or L2 parameters, Algorithm 1 can lead to substantial improvements over deep ensembles. In Appendix C.7.5, we conduct ablation studies to relate to the top-K strategy used in [60] and NES-RS from [75].
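Since the pseudocode of hyper_ens is deferred to Appendix A, the following NumPy sketch gives one plausible reading of it based on the description above: it selects models with replacement so as to greedily minimize the validation NLL of the averaged predictions. Function names, the scoring details, and the toy data are our assumptions, not the paper's code.

```python
import numpy as np

def hyper_ens(val_probs, val_labels, K):
    """Greedy ensemble construction with replacement, in the spirit of [13].

    val_probs: (n_models, n_examples, n_classes) validation probabilities.
    Returns indices of the selected models; duplicates act as weights.
    """
    n = val_labels.shape[0]
    selected, ens_sum = [], np.zeros_like(val_probs[0])
    for _ in range(K):
        best_j, best_nll = 0, np.inf
        for j in range(val_probs.shape[0]):
            # Score the candidate ensemble obtained by adding model j.
            mean_probs = (ens_sum + val_probs[j]) / (len(selected) + 1)
            nll = -np.mean(np.log(mean_probs[np.arange(n), val_labels] + 1e-12))
            if nll < best_nll:
                best_j, best_nll = j, nll
        selected.append(best_j)
        ens_sum += val_probs[best_j]
    return selected

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(25, 100))   # 25 candidate models, 100 examples
labels = rng.integers(10, size=100)
print(hyper_ens(probs, labels, K=3))
```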
4 Hyper-batch ensembles

This section presents our efficient approach to construct ensembles over different hyperparameters.

4.1 Composing the layer structures of batch ensembles and self-tuning networks

The core idea lies in the composition of the layers used by batch ensembles [69] for ensembling parameters and by self-tuning networks [52] for parameterizing the layer as an explicit function of hyperparameters. The composition preserves complementary features from both approaches. We continue the example of the dense layer from Section 2.1 and Section 2.2. The convolutional layer is described in Appendix B.1. Assuming an ensemble of size $K$, we have for $k \in \{1, \dots, K\}$

$$W_k(\lambda_k) = W \circ (r_k s_k^\top) + [\Delta \circ (u_k v_k^\top)] \circ e(\lambda_k)^\top \quad \text{and} \quad b_k(\lambda_k) = b_k + \delta_k \circ e'(\lambda_k), \quad (7)$$

where the $r_k$'s (respectively, $u_k$'s) in $\mathbb{R}^r$ and the $s_k$'s (respectively, $v_k$'s) in $\mathbb{R}^s$ are vectors which diversify the shared matrix $W$ (respectively, $\Delta$) in $\mathbb{R}^{r \times s}$; and the $b_k$'s and $\delta_k$'s in $\mathbb{R}^s$ are the bias terms for each of the $K$ ensemble members. We comment on some important properties of (7):

• As noted by [69], formulation (2) includes a set of rank-1 factors which diversify individual ensemble member weights. In (7), the rank-1 factors $r_k s_k^\top$ and $u_k v_k^\top$ capture this weight diversity for each respective term.

• As noted by [52], formulation (3) captures local hyperparameter variations in the vicinity of some $\lambda$. The term $[\Delta \circ (u_k v_k^\top)] \circ e(\lambda_k)^\top$ in (7) extends this behavior to the vicinity of the $K$ hyperparameters $\{\lambda_1, \dots, \lambda_K\}$ indexing the $K$ ensemble members.

• Equation (7) maintains the compactness of the original layers of [52, 69], with a resulting memory footprint about twice as large as [69] and equivalent to [52] up to the rank-1 factors.

• Given $K$ hyperparameters $\{\lambda_1, \dots, \lambda_K\}$ and a batch of inputs $X \in \mathbb{R}^{b \times r}$, the structure of (7) preserves the efficient minibatching of [69]. If $\mathbf{1}_b$ is the vector of ones in $\mathbb{R}^b$, we can tile $X$, $\mathbf{1}_b \lambda_k^\top$ and $\mathbf{1}_b e(\lambda_k)^\top$, enabling all $K$ members to predict in a single forward pass (see the sketch below).

• From an implementation perspective, (7) enables direct reuse of existing code, e.g., DenseBatchEnsemble and Conv2DBatchEnsemble from [68]. The implementation of our layers can be found in https://github.com/google/edward2.
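A NumPy sketch of the composed dense layer (7), evaluating the members one at a time for clarity (ours, for illustration; the actual layers live in edward2). We reuse a linear embedding $e(\lambda) = C\lambda$ as in Section 2.2; all sizes are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
r, s, m, K = 5, 3, 2, 4                    # layer dims, n. hyperparameters, ensemble size

W, Delta = rng.normal(size=(r, s)), rng.normal(size=(r, s))      # shared matrices
R, S = rng.normal(size=(r, K)), rng.normal(size=(s, K))          # r_k, s_k factors
U, V = rng.normal(size=(r, K)), rng.normal(size=(s, K))          # u_k, v_k factors
B, D = rng.normal(size=(s, K)), rng.normal(size=(s, K))          # biases b_k, delta_k
C, C_prime = rng.normal(size=(s, m)), rng.normal(size=(s, m))    # linear embeddings

def member(x, lam, k):
    """Member k of the hyper-batch ensemble dense layer, eq. (7)."""
    e, e_prime = C @ lam, C_prime @ lam
    W_k = W * np.outer(R[:, k], S[:, k]) + (Delta * np.outer(U[:, k], V[:, k])) * e
    b_k = B[:, k] + D[:, k] * e_prime
    return x @ W_k + b_k

x = rng.normal(size=r)
lams = np.exp(rng.uniform(np.log(1e-4), np.log(1e-1), size=(K, m)))  # one lambda_k per member
outs = np.stack([member(x, lams[k], k) for k in range(K)])           # K member outputs
```

As in Section 2.1, tiling the batch together with the per-member embeddings allows all $K$ members to share one forward pass instead of the explicit loop above.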
4.2 Objective function: from single model to ensemble

We first need to slightly overload the notation from Section 2.2 and write $f_\Theta(x, \lambda_k)$ to denote the prediction for the input $x$ of the $k$-th ensemble member, indexed by $\lambda_k$. In $\Theta$, we pack all the parameters of $f$, such as those described in the example of the dense layer in Section 4.1. In particular, predicting with $\lambda_k$ is understood as using the corresponding parameters $\{W_k(\lambda_k), b_k(\lambda_k)\}$ in (7).

Training and validation objectives. We want the ensemble members to account for a diverse combination of hyperparameters. As a result, each ensemble member is assigned its own distribution of hyperparameters, which we write $p_t(\lambda_k) = p(\lambda_k|\xi_{k,t})$ for $k \in \{1, \dots, K\}$. Along the lines of (4), we consider an expected training objective which now simultaneously operates over $\Lambda_K = \{\lambda_k\}_{k=1}^{K}$:

$$\min_\Theta \mathbb{E}_{\Lambda_K \sim q_t,\,(x,y)\in\mathcal{D}}\big[L(x, y, \Theta, \Lambda_K)\big] \quad \text{with} \quad q_t(\Lambda_K) = q(\Lambda_K|\{\xi_{k,t}\}_{k=1}^{K}) = \prod_{k=1}^{K} p_t(\lambda_k), \quad (8)$$

and where $L$, compared with (1), is extended to handle the ensemble predictions:

$$L(x, y, \Theta, \Lambda_K) = \ell\big(\{f_\Theta(x, \lambda_k)\}_{k=1}^{K}, y\big) + \Omega\big(\Theta, \{\lambda_k\}_{k=1}^{K}\big).$$

For example, the loss $\ell$ can be the ensemble cross entropy or the average ensemble-member cross entropy (in our experiments, we will use the latter, as recent results suggest it often generalizes better [17]; see the sketch at the end of this section). The introduction of one distribution $p_t$ per ensemble member also affects the validation step of the alternating optimization; in particular, we adapt (6) to become

$$\min_{\{\xi_{k,t}\}_{k=1}^{K}} \mathbb{E}_{\Lambda_K \sim q_t,\,(x,y)\in\mathcal{D}_{\text{val}}}\big[\ell_{\text{val}}(\{f_\Theta(x, \lambda_k)\}_{k=1}^{K}, y) - \tau\,\mathcal{H}[q_t(\Lambda_K)]\big]. \quad (9)$$

Note that the extensions (8)-(9) with $K = 1$ fall back to the standard formulation of [52]. In our experiments, we take $\Omega$ to be L2 regularizers applied to the parameters $W_k(\lambda_k)$ and $b_k(\lambda_k)$ of each ensemble member. In Appendix B.2, we show how to efficiently vectorize the computation of $\Omega$ across the ensemble members and mini-batches of $\{\lambda_k\}_{k=1}^{K}$ sampled from $q_t$, as required by (8). In practice, we use one sample of $\Lambda_K$ for each data point in the batch: for MLP/LeNet (Section 5.1), we use 256, while for ResNet-20/Wide ResNet 28-10 (Section 5.2), we use 512 (64 for each of 8 workers).

Definition of $p_t$. In the experiments of Section 5, we will manipulate hyperparameters $\lambda$ that are positive and bounded (e.g., a dropout rate). For each ensemble member with hyperparameters $\lambda_k \in \mathbb{R}^m$, we thus define its distribution $p_t(\lambda_k) = p(\lambda_k|\xi_{k,t})$ to be $m$ independent log-uniform distributions (one per dimension in $\lambda_k$), which is a standard choice for hyperparameter tuning, e.g., [5, 6, 53]. With this choice, $\xi_{k,t}$ contains $2m$ parameters, namely the bounds of the ranges of the $m$ distributions. Similar to [52], at prediction time, we take $\lambda_k$ to be equal to the mean $\lambda_k^{\text{mean}}$ of the distribution $p_t(\lambda_k)$. In Appendix B.3, we provide additional details about $p_t$. The validation steps (6) and (9) seek to optimize the bounds of the ranges. More specifically, the loss $\ell_{\text{val}}$ favors compact ranges around a good hyperparameter value, whereas the entropy term encourages wide ranges, as traded off by $\tau$. We provide an example of the optimization trajectory of $\lambda$ and its range in Figure 2-(right), where $\lambda$ corresponds to the mean of the log-uniform distribution.
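As a concrete instance of the loss $\ell$ in (8), here is a minimal sketch of the average ensemble-member cross entropy on a batch. The random probabilities stand in for the network outputs $f_\Theta(x, \lambda_k)$, each member evaluated at its own sampled $\lambda_k$; the regularizer $\Omega$ is omitted.

```python
import numpy as np

def avg_member_xent(member_probs, labels):
    """Average ensemble-member cross entropy, one choice of ell in eq. (8).

    member_probs: (K, batch, n_classes) predictive probabilities.
    """
    K, b, _ = member_probs.shape
    picked = member_probs[:, np.arange(b), labels]          # prob. of true class, (K, batch)
    return -np.log(picked + 1e-12).mean()

rng = np.random.default_rng(0)
K, b, c = 3, 8, 10
logits = rng.normal(size=(K, b, c))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)   # softmax per member
labels = rng.integers(c, size=b)
loss = avg_member_xent(probs, labels)                            # add Omega(...) for the full (8)
```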
5 Experiments

Throughout the experiments, we use both metrics that depend on the predictive uncertainty, namely the negative log-likelihood (NLL) and the expected calibration error (ECE) [55] (see the sketch at the end of Section 5.1), and metrics that do not, e.g., the classification accuracy. The supplementary material also reports the Brier score [10] (for which we typically observed a strong correlation with NLL). Moreover, as a diversity metric, we take the predictive disagreement of the ensemble members normalized by (1 − accuracy), as used in [22]. In the tables, we write the number of ensemble members in brackets "(·)" next to the names of the methods; for instance, "(3)" denotes an ensemble of size 3.

5.1 Multi-layer perceptron and LeNet on Fashion MNIST & CIFAR-100

To validate our approaches and run numerous ablation studies, we first focus on small-scale models, namely an MLP and LeNet [44], over CIFAR-100 [40] and Fashion MNIST [73]. For both models, we add a dropout layer [66] before their last layer. For each pair of dataset/model type, we consider two tuning settings involving the dropout rate and different L2 regularizers defined at varied granularity, e.g., layerwise. Appendix C.1 gives all the details about the training, tuning and dataset definitions.

Baselines. We compare our methods, (i) hyper-deep ens: the hyper-deep ensemble of Section 3 and (ii) hyper-batch ens: the hyper-batch ensemble of Section 4, to (a) rand search: the best single model after 50 trials of random search [6], (b) Bayes opt: the best single model after 50 trials of Bayesian optimization [63, 27], (c) deep ens: a deep ensemble [43] using the best hyperparameters found by random search, (d) batch ens: a batch ensemble [69], (e) STN: self-tuning networks [52], and (f) fixed init hyper ens: defined in Section 3. The supplementary material details how we tune the hyperparameters specific to batch ens, STN and hyper-batch ens (see Appendix C.2, Appendix C.3 and Appendix C.4, with further ablations about e in Appendix C.5 and τ in Appendix C.6). Note that batch ens needs the tuning of its own hyperparameters and those of the MLP/LeNet models, while STN and hyper-batch ens automatically tune the latter.

We highlight below the key conclusions from Table 1 with single models and ensembles of size 3. The same conclusions can also be drawn for ensembles of size 5 (see Appendix C.7.1).

Ensembles benefit from both weight and hyperparameter diversity. With the pictorial view of Figure 2 in mind, fixed init hyper ens, i.e., a "row", tends to outperform deep ens, i.e., a "column". Moreover, those two approaches (as well as the other methods of the benchmark) are outperformed by our stratified procedure hyper-deep ens, demonstrating the benefit of combining hyperparameter and initialization diversity (see Appendix C.7.2 for a detailed assessment of the statistical significance). In Appendix C.7.3, we study the diversity more specifically and show that hyper-deep ens indeed makes more diverse predictions than deep ens.

Efficient ensembles benefit from both weight and hyperparameter diversity. Among the efficient approaches (the three rightmost columns of Table 1), hyper-batch ens performs best. It improves upon both STN and batch ens, the two methods it builds upon. In line with [52], STN typically matches or improves upon rand search and Bayes opt. As explained in Section 4.1, hyper-batch ens has, however, twice the number of parameters of batch ens. In Appendix C.7.4, we thus compare with a "deep ensemble of two batch ensembles" (i.e., resulting in the same number of parameters but twice as many members as for hyper-batch ens). In that case, hyper-batch ens also either improves upon or matches the performance of the combination of two batch ens.
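For reference, a minimal sketch of the binned ECE computation [55] used throughout this section; the choice of 15 equal-width confidence bins is a common convention, not a detail taken from the paper.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Binned ECE [55]: bin-weighted mean of |accuracy - confidence|."""
    conf = probs.max(axis=1)                 # top-1 confidence per example
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10), size=500)     # placeholder predictive probabilities
y = rng.integers(10, size=500)
print(expected_calibration_error(p, y))
```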
5.2 ResNet-20 and Wide ResNet-28-10 on CIFAR-10 & CIFAR-100

We evaluate our approach in a large-scale setting with ResNet-20 [31] and Wide ResNet 28-10 [74] models, as they are simple architectures with competitive performance on image classification tasks. We consider six different L2 regularization hyperparameters (one for each block of the ResNet) and a label smoothing hyperparameter. We show results on CIFAR-10, CIFAR-100 and corruptions of CIFAR-10 [33, 64]. Moreover, in Appendix D.3, we provide additional out-of-distribution evaluations along the lines of [32]. Further details about the experimental settings can be found in Appendix D.

CIFAR-10/100. We compare hyper-deep ens with a single model (tuned as explained next) and deep ens of varying ensemble sizes. Our hyper-deep ens is constructed based on 100 trials of random search, while deep ens and single take the best hyperparameter configuration found by the random search procedure. Figure 1 displays the results on CIFAR-100 along with the standard errors, and shows that, throughout the ensemble sizes, there is a substantial performance improvement of hyper-deep ensembles over deep ensembles. The results for CIFAR-10 are shown in Appendix D, where hyper-deep ens leads to consistent but smaller improvements, e.g., in terms of NLL.

We next fix the ensemble size to four and compare the performance of hyper-batch ens with the directly competing method batch ens, as well as with hyper-deep ens, deep ens and single. The results are reported in Table 2. On CIFAR-100, hyper-batch ens improves upon, or matches, batch ens across all metrics. For instance, in terms of NLL, it improves upon batch ens by about 7% and 2% for ResNet-20 and Wide ResNet 28-10 respectively. Moreover, the members of hyper-batch ens make more diverse predictions than those of batch ens. On CIFAR-10, hyper-batch ens also achieves a consistent improvement, though less pronounced (see Table 2).

On the same Wide ResNet 28-10 benchmark, with identical training and evaluation pipelines (see https://github.com/google/uncertainty-baselines), variational inference [70] leads to (NLL, ACC, ECE) = (0.211, 0.947, 0.029) and (0.944, 0.778, 0.097) for CIFAR-10 and CIFAR-100 respectively, while Monte Carlo dropout [23] gets (NLL, ACC, ECE) = (0.160, 0.959, 0.024) and (0.830, 0.776, 0.050) for CIFAR-10 and CIFAR-100 respectively.

We can finally look at how the joint training in hyper-batch ens leads to complementary ensemble members. For instance, for Wide ResNet 28-10 on CIFAR-100, while the ensemble performance is (NLL, ACC) = (0.678, 0.820) (see Table 2), the individual members obtain substantially poorer performance, as measured by the average ensemble-member metrics (NLL, ACC) = (0.904, 0.788).

Training time and memory cost. Both in terms of the number of parameters and training time, hyper-batch ens is about twice as costly as batch ens. For CIFAR-100, hyper-batch ens takes 2.16 minutes/epoch and batch ens 1.10 minutes/epoch. More details are available in Appendix D.6.

Calibration on out-of-distribution data. We measure calibration on corrupted datasets, a type of out-of-distribution examples. We consider the recently published dataset of [33], which consists of over 30 types of corruptions applied to the images of CIFAR-10. A similar benchmark can be found in [64]. In Figure 3, we find that all ensemble methods improve upon the single model. The mean accuracies are similar for all ensemble methods, whereas hyper-batch ens shows more robustness than batch ens, as it typically leads to smaller worst-case values (see the bottom whiskers in Figure 3). Plots for the calibration error and NLL can be found in Appendix D.5.

6 Discussion

We envision several promising directions for future research.

Towards more compact parametrization. In this work, we have used the layers from [52], which lead to a 2x increase in memory compared with standard layers. In lieu of (3), low-rank parametrizations, e.g., $W + \sum_{j=1}^{h} e_j(\lambda)\, g_j h_j^\top$, would be appealing to reduce the memory footprint of self-tuning networks and hyper-batch ensembles. We formally show in Appendix E that this family of parametrizations is well motivated in the case of shallow models, where they enjoy good approximation guarantees.
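A small sketch of this hypothetical rank-h parametrization (our illustration of the proposal, not an implementation from the paper), again assuming linear embeddings $e_j(\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(0)
r, s, m, h = 5, 3, 2, 2              # layer dims, n. hyperparameters, rank

W = rng.normal(size=(r, s))
G = rng.normal(size=(r, h))          # columns g_1, ..., g_h
H = rng.normal(size=(s, h))          # columns h_1, ..., h_h
E = rng.normal(size=(h, m))          # e_j(lambda) = E[j] @ lambda (assumed linear)

def low_rank_W(lam):
    """W + sum_j e_j(lambda) g_j h_j^T: a rank-h response to lambda."""
    e = E @ lam                      # (h,)
    return W + (G * e) @ H.T         # equals sum_j e_j * outer(g_j, h_j)

W_lam = low_rank_W(np.array([1e-3, 1e-2]))
```

Compared with (3), which adds a dense $r \times s$ modulation $\Delta$, this stores only $h(r + s)$ extra entries per layer.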
Architecture diversity. Our proposed hyperparameter ensembles provide diversity with respect to hyperparameters related to regularization and optimization. We would like to go further and ensemble very different functions in the search space, such as networks of varying width and depth [2], or with different choices of residual block. Doing so connects to older work on Bayesian marginalization over structures [37, 1]. More broadly, one can ask what other types of diversity matter to endow deep learning models with better uncertainty estimates.

Broader Impact

Our work belongs to a broader research effort that tries to quantify the predictive uncertainty of deep neural networks. Those models are known to generalize poorly under small changes to the data while maintaining high confidence in their predictions.

Who may benefit from this research? The broader topic of our work is becoming increasingly important in a context where machine learning systems are being deployed in safety-critical fields, e.g., medical diagnosis [54, 49] and self-driving cars [48]. Those examples would benefit from the general technology we contribute to. In those cases, it is essential to be able to reliably trust the uncertainty output by the models before any decision-making process, to possibly escalate uncertain decisions to appropriate human operators.

Who may be put at a disadvantage by this research? We are not aware of a group of people that may be put at a disadvantage as a result of this research.

What are the consequences of failure of the system? By definition, our research could contribute to aspects of machine-learning systems used in high-risk domains (e.g., the medical fields and self-driving cars mentioned earlier), which involve complex data-driven decision-making processes. Depending on the nature of the application at hand, a failure of the system could lead to extremely negative consequences. A case in point is the recent screening system used by one third of UK government councils to allocate welfare budgets.

Does the task/method leverage biases in the data? The method we develop in this work is domain-agnostic and does not rely on specific data assumptions. Our method also does not contain components that would prevent its combination with existing fairness or privacy-preserving technologies [4].

Acknowledgments

We would like to thank Nicolas Le Roux, Alexey Dosovitskiy and Josip Djolonga for insightful discussions at earlier stages of this project. Moreover, we would like to thank Sebastian Nowozin, Klaus-Robert Müller and Balaji Lakshminarayanan for helpful comments on a draft of this paper.
1. What is the main contribution of the paper, and how does it extend beyond Wide ResNet?
2. What are the strengths of the paper, particularly in terms of its empirical evaluation and novel approach?
3. Do you have any concerns or questions regarding the idea's novelty, specifically in relation to forced initialization points?
4. Can you clarify the explanation for the change from O(κK) to O(K^2) on page 5, line 172?
5. How significant is the improvement in using stratified hyper-ensembles over other methods, and what is the meaning of the numbers in brackets (1), (4) in Tables 1 and 2?
6. Are there any other deep models, such as VGG, ResNet, or DenseNet, that could be used to support the empirical evaluation and demonstrate the method's versatility?
7. How do the results of the comparison between two deep ensembles for Wide ResNet impact the paper's overall findings, as mentioned in lines 269-272?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

++ Post Rebuttal
I'm happy with the rebuttal, which clarified some points about the paper. The extra experiments show that the method extends beyond Wide ResNet too. For this I'm raising my score. ++

This paper discusses the idea of combining ensembles over hyperparameters and over different initializations of a deep model. The paper applies this idea in two directions. The first is deep ensembles, for which they introduce stratified hyper ensembles, a greedy search algorithm that improves over the hyper-ensemble method. The second direction is applying the idea to batch ensembles, a budget-wise ensemble mechanism. They introduce hyper-batch ensembles, which are 2x the size of batch ensembles. They also merge the idea of self-tuning networks into hyper-batch ensembles to obtain an efficient, non-greedy upgrade of batch ensembles.

Strengths

I have read the paper several times (4-5) to make sure of these points:
- I'm not sure whether the idea itself is novel in this area, but it seems valid.
- The empirical evaluation shows an improvement over previous methods, and it covers the comparison with different alternatives such as deep ensembles and batch ensembles.
- The analysis of the pictorial view in Figure 2, where they show that deep ensembles and hyper-ensemble parameter search are special cases of the method.
- The upgrade of self-tuning networks to handle K ensemble members, as in Equation 7, together with the updated objective function in Equation 8, is another contribution of the paper.

Weaknesses

I do have several questions:
- Did the hyper-ensemble paper force the networks to start from the same initialization point? I looked into the paper in ref [12] for this information but couldn't tell. If so, then the work would need more justification of the difference w.r.t. hyper-ensembles.
- On page 5, starting from line 172, it was not clear why O(κK) became O(K^2); can you elaborate more on this?
- From Table 1 and Table 2, it seems that hyper ens, stratified hyper ens and deep ens are quite close to each other in NLL/ACC/ECE ranges. What exactly is the range of improvement of using stratified hyper ens over the others?
- In Tables 1 and 2, what is the meaning of the numbers in brackets (1), (4)?
- I understand that the empirical evaluation is expensive, but reporting results on other deep models such as VGG, ResNet or DenseNet for a small subset of the settings would clear any doubts that the method only works best for Wide ResNet.
- On the same point regarding Wide ResNet, as in lines 269-272 on using two deep ensembles: what are the results of this comparison for Wide ResNet?
NIPS
1. What is the main contribution of the paper? 2. How does the proposed approach generalize deep ensembles? 3. What are the strengths of the paper regarding its impact on uncertainty quantification and potential applications? 4. What are the weaknesses of the paper regarding its connection to prior works in Bayesian modeling? 5. Are there any suggestions for improving the empirical evaluation?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Post-rebuttal: ----------------------------------------------- Thank you for the clarification. After reading the other reviews and the rebuttal, I decided to increase my score. ----------------------------------------------- This paper presents a generalization of deep ensembles (Lakshminarayanan et al., NIPS 2017). In addition to ensembling neural networks based on distinct points in the parameter space, this paper proposes to also ensemble over the *hyper*parameter space (which contains all possible values of neural network hyperparameters, e.g. weight decay, dropout rate, learning rate). The authors propose a greedy algorithm for constructing an ensemble given a set of models which have different parameter and hyperparameter values. Furthermore, a lightweight version of this algorithm, based on the recently proposed batch ensembles (Wen et al., ICLR 2020), is proposed. Extensive experiments show that ensembling networks based on both parameters and hyperparameters yields a substantial improvement in uncertainty quantification compared to the baselines. Strengths I like this paper since it is an important step toward full uncertainty quantification (i.e. quantifying all sources of uncertainty) of neural networks. As more sources of uncertainty are quantified, the quality of predictive uncertainty---the quantity that matters the most in predictive systems---improves. This can have big implications in safety-critical systems since one can trust the predictive uncertainty better. Empirical evaluation is solid but can be further improved: it is sufficiently broad but it could be more in-depth. Please find some suggestions at the bottom of this review. Weaknesses This paper has a strong connection to the hierarchical Bayesian modeling of neural networks, where a hyperprior (prior distribution over hyperparameters) is assigned to the probabilistic model. Hyper ensembles can roughly be seen as consisting of samples from the posterior of this Bayesian model. It is thus a bit disappointing that the authors did not compare, discuss, or at least mention this connection in the paper. A comparison and discussion would be very helpful to point out exactly the novelty of hyper ensembles compared to this established Bayesian modeling technique.
NIPS
Title Hyperparameter Ensembles for Robustness and Uncertainty Quantification Abstract Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles. 1 Introduction Neural networks are well-suited to form ensembles of models [30]. Indeed, neural networks trained from different random initialization can lead to equally well-performing models that are nonetheless diverse in that they make complementary errors on held-out data [30]. This property is explained by the multi-modal nature of their loss landscape [24] and the randomness induced by both their initialization and the stochastic methods commonly used to train them [8, 38, 9]. Many mechanisms have been proposed to further foster diversity in ensembles of neural networks, e.g., based on cyclical learning rates [36] or Bayesian analysis [17]. In this paper, we focus on exploiting the diversity induced by combining neural networks defined by different hyperparameters. This concept is already well-established [13] and the auto-ML community actively applies it [21, 65, 53, 46]. We build upon this research with the following two complementary goals. First, for performance independent of computational and memory budget, we seek to improve upon deep ensembles [43], the current state-of-the-art ensembling method in terms of robustness and uncertainty quantification [64, 28]. To this end, we develop a simple stratification scheme which combines random search and the greedy selection of hyperparameters from [13] with the benefit of multiple random initializations per hyperparameter, as in deep ensembles. Figure 1 illustrates our algorithm for a Wide ResNet 28-10, where it leads to substantial improvements, highlighting the benefits of combining different initializations and hyperparameters. Second, we seek to improve upon batch ensembles [69], the current state-of-the-art in efficient ensembles. To this end, we propose a parameterization combining that of [69] and self-tuning networks [52], which enables both weight and hyperparameter diversity. Our approach is a drop-in replacement that outperforms batch ensembles and does not need a separate tuning of the hyperparameters. 1.1 Related work Ensembles over neural network weights. Combining the outputs of several neural networks to improve upon their individual performance has a long history, e.g., [47, 30, 25, 41, 58, 15].
Since the quality of an ensemble hinges on the diversity of its members [30], many mechanisms were developed to generate diverse ensemble members. For instance, cyclical learning-rate schedules can explore several local minima [36, 76] at which ensemble members can be snapshotted. Other examples are MC dropout [23] or the random initialization itself, possibly combined with the bootstrap [45, 43]. More generally, Bayesian neural networks can be seen as ensembles whose members are weighted by the (approximate) posterior distribution over the parameters [34, 51, 56, 7, 71, 72]. Hyperparameter ensembles. Hyperparameter-tuning methods [20] typically produce a pool of models from which ensembles can be constructed post hoc, e.g., [65]. This idea has been made systematic as part of auto-sklearn [21] and successfully exploited in several other contexts, e.g., [19], and specifically for neural networks [53] as well as in computer vision [60] and genetics [35]. In particular, the greedy ensemble construction from [13] (and later variations thereof [12]) was shown to work best among alternative algorithms that are either more expensive or more prone to overfitting. To the best of our knowledge, such ensembles based on hyperparameters have not been studied in the light of predictive uncertainty. Moreover, we are not aware of existing methods to efficiently build such ensembles, analogous to what batch ensembles do for deep ensembles. Finally, recent research in Bayesian optimization has also focused on directly optimizing the performance of the ensemble while tuning the hyperparameters [46]. Hyperparameter ensembles also connect closely to probabilistic models over structures. These works often analyze Bayesian nonparametric distributions, such as over the depth and width of a neural network, leveraging Markov chain Monte Carlo for inference [37, 1, 18, 42]. In this work, we examine more parametric assumptions, building on the success of variational inference and mixture distributions: for example, the validation step in hyper-batch ensembles can be viewed as a mixture variational posterior, and the entropy penalty is the ELBO's KL divergence toward a uniform prior. Concurrent to our paper, [75] construct neural network ensembles within the context of neural architecture search, showing improved robustness for predictions under distributional shift. One of their methods, NES-RS, has similarities with our hyper-deep ensembles (see Section 3), also relying on both random search and [13] to form ensembles, but it does not stratify over different initializations. We vary the hyperparameters while keeping the architecture fixed, whereas [75] study the converse. Furthermore, [75] do not explore a parameter- and computationally-efficient method (see Section 4). Efficient hyperparameter tuning & best-response function. Some hyperparameters of a neural network, e.g., its L2 regularization parameter(s), can be optimized by estimating the best-response function [26], i.e., the mapping from the hyperparameters to the parameters of the neural network solving the problem at hand [11]. Learning this mapping is an instance of learning a hypernetwork [61, 62, 29] and falls within the scope of bilevel optimization problems [14]. Because of the daunting complexity of this mapping, [50, 52] proposed scalable local approximations of the best-response function. Similar methodology was also employed for style transfer and image compression [3, 16].
The self-tuning networks from [52] are an important building block of our approach, wherein we extend their setting to the case of an ensemble over different hyperparameters. 1.2 Contributions We examine two regimes to exploit hyperparameter diversity: (a) ensemble performance independent of budget and (b) ensemble performance seeking parameter efficiency, where, respectively, deep and batch ensembles [43, 69] are state-of-the-art. We propose one ensemble method for each regime: (a) Hyper-deep ensembles. We define a greedy algorithm to form ensembles of neural networks exploiting two sources of diversity: varied hyperparameters and random initialization. By stratifying models with respect to the latter, our algorithm subsumes deep ensembles, which we outperform in our experiments. Our approach is a simple, strong baseline that we hope will be used in future research. (b) Hyper-batch ensembles. We efficiently construct ensembles of neural networks defined over different hyperparameters. Both the ensemble members and their hyperparameters are learned end-to-end in a single training procedure, directly maximizing the ensemble performance. Our approach outperforms batch ensembles and generalizes the layer structure of [52] and [69], while keeping their original memory compactness and efficient minibatching for parallel training and prediction. We illustrate the benefits of our two ensemble methods on image classification tasks, with multi-layer perceptron, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, in terms of both predictive performance and uncertainty. The code for generic hyper-batch ensemble layers can be found in https://github.com/google/edward2 and the code to reproduce the experiments of Section 5.2 is part of https://github.com/google/uncertainty-baselines. 2 Background We introduce the notation and background required to define our approach. Consider an i.i.d. classification setting with data $D = \{(x_n, y_n)\}_{n=1}^N$, where $x_n \in \mathbb{R}^d$ is the feature vector corresponding to the $n$-th example and $y_n$ its class label. We seek to learn a classifier in the form of a neural network $f_\theta$ where all its parameters (weights and bias terms) are summarized in $\theta \in \mathbb{R}^p$. In addition to its primary parameters $\theta$, the model $f_\theta$ will also depend on $m$ hyperparameters that we refer to as $\lambda \in \mathbb{R}^m$. For instance, an entry in $\lambda$ could correspond to the dropout rate of a given layer in $f_\theta$. Equipped with some loss function $\ell$, e.g., the cross entropy, and some regularization term $\Omega(\cdot, \lambda)$, e.g., the squared L2 norm with a strength defined by an entry of $\lambda$, we are interested in $$\hat{\theta}(\lambda) \in \arg\min_{\theta \in \mathbb{R}^p} \mathbb{E}_{(x,y) \in D}\big[L(x, y, \theta, \lambda)\big] \quad \text{with} \quad L(x, y, \theta, \lambda) = \ell(f_\theta(x, \lambda), y) + \Omega(\theta, \lambda), \quad (1)$$ where $\mathbb{E}_{(x,y) \in D}[\cdot]$ stands for the expectation with a uniform distribution over $D$. As we shall see in Section 5, the loss $\ell = \ell_\lambda$ can also depend on $\lambda$, for instance to control a label smoothing parameter [67]. In general, $\lambda$ is chosen based on some held-out evaluation metric by grid search, random search [6] or more sophisticated hyperparameter-tuning methods [20]. 2.1 Deep ensembles and batch ensembles Deep ensembles [43] are a simple ensembling method where neural networks with different random initialization are combined. Deep ensembles lead to remarkable predictive performance and robust uncertainty estimates [64, 28]. Given some hyperparameters $\lambda_0$, a deep ensemble of size $K$ amounts to solving (1) $K$ times with random initialization and aggregating the outputs of $\{f_{\hat{\theta}_k(\lambda_0)}(\cdot, \lambda_0)\}_{k=1}^K$.
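As a concrete illustration of this aggregation step, here is a minimal NumPy sketch of deep-ensemble prediction, assuming the K members have already been trained and output class probabilities; the names are illustrative.

import numpy as np

def deep_ensemble_predict(member_probs):
    # member_probs: (K, batch, classes) softmax outputs of K networks trained
    # from different random initializations with shared hyperparameters lambda_0.
    return member_probs.mean(axis=0)

def nll(probs, labels):
    # Negative log-likelihood of the averaged predictive distribution.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))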
Batch ensembles [69] are a state-of-the-art efficient alternative to deep ensembles, preserving their performance while reducing their computational and memory burden. To simplify the presentation, we focus on the example of a dense layer in $f_\theta$, with weight matrix $W \in \mathbb{R}^{r \times s}$, where $r$ and $s$ denote the input and output dimensions of the layer respectively. A deep ensemble of size $K$ needs to train, predict with, and store $K$ weight matrices $\{W_k\}_{k=1}^K$. Instead, batch ensembles consider a single matrix $W \in \mathbb{R}^{r \times s}$ together with two sets of auxiliary vectors $[r_1, \ldots, r_K] \in \mathbb{R}^{r \times K}$ and $[s_1, \ldots, s_K] \in \mathbb{R}^{s \times K}$ such that the role of $W_k$ is played by $$W \circ (r_k s_k^\top) \quad \text{for each } k \in \{1, \ldots, K\}, \quad (2)$$ where we denote by $\circ$ the element-wise product (which we will broadcast row-wise or column-wise depending on the shapes at play). Not only does (2) lead to a memory saving, but it also allows for efficient minibatching, where each datapoint may use a different ensemble member. Given a batch of inputs $X \in \mathbb{R}^{b \times r}$, the predictions for the $k$-th member equal $X[W \circ (r_k s_k^\top)] = [(X \circ r_k^\top)W] \circ s_k^\top$. By properly tiling the batch $X$, the $K$ members can thus predict in parallel in one forward pass [69]. 2.2 Self-tuning networks Hyperparameter tuning typically involves multiple runs of the training procedure. One efficient alternative [50, 52] is to approximate the best-response function, i.e., the mapping from $\lambda$ to the optimal parameters $\hat{\theta}(\lambda)$. The local approximation of [52] captures the changes of $\lambda$ by scaling and shifting the hidden units of $f_\theta$, which requires in turn extra parameters $\theta' \in \mathbb{R}^{p'}$, summarized in $\Theta = \{\theta, \theta'\}$. [52] call the resulting approach a self-tuning network since $f_\Theta$ tunes its own hyperparameters $\lambda$ online. In the sequel, $\lambda$ will be continuous, such as dropout rates, L2 penalties and label smoothing. Example of the dense layer. We illustrate the choice and role of $\theta'$ in the example of a dense layer (the convolutional layer is similar to [59]; see details in [52]). The weight matrix $W \in \mathbb{R}^{r \times s}$ and bias $b \in \mathbb{R}^s$ of a dense layer are defined as (with $\Delta$ and $\delta$ of the same shapes as $W$ and $b$ respectively) $$W(\lambda) = W + \Delta \circ e(\lambda)^\top \quad \text{and} \quad b(\lambda) = b + \delta \circ e'(\lambda), \quad (3)$$ where $e(\lambda) \in \mathbb{R}^s$ and $e'(\lambda) \in \mathbb{R}^s$ are real-valued embeddings of $\lambda$. In [52], the embedding is linear, i.e., $e(\lambda) = C\lambda$ and $e'(\lambda) = C'\lambda$. In this example, we have the original parameters $\theta = \{W, b\}$ as well as the additional parameters $\theta' = \{\Delta, \delta, C, C'\}$. Training objective. Since $\theta'$ captures changes in $\theta$ induced by changes in $\lambda$, [50, 52] replace the typical objective (1), defined for a single value of $\lambda$, with an expected objective [50, 52, 16], $$\min_{\Theta \in \mathbb{R}^{p+p'}} \mathbb{E}_{\lambda \sim p(\lambda), (x,y) \in D}\big[L(x, y, \Theta, \lambda)\big], \quad (4)$$ where $p(\lambda)$ denotes some distribution over the hyperparameters $\lambda$. When $p$ is kept fixed during the optimization of (4), the authors of [50] observed that $\hat{\theta}(\lambda)$ is not well approximated, and they proposed instead to use a distribution $p_t(\lambda) = p(\lambda|\xi_t)$ varying with the iteration $t$. In our work we choose $p(\cdot|\xi_t)$ to be a log-uniform distribution with $\xi_t$ containing the bounds of the ranges of $\lambda$ (see Section 4). The key benefit of (4) is that a single (though more costly) training gives access to a mapping $\lambda \mapsto f_{\hat{\Theta}}(\cdot, \lambda)$ which approximates the behavior of $f_{\hat{\Theta}}$ for hyperparameters in the support of $p(\lambda)$. Alternating optimization. The procedure followed by [52] consists in alternating between training and tuning steps. First, the training step performs a stochastic gradient update of $\Theta$ in (4), jointly sampling $\lambda \sim p(\lambda|\xi_t)$ and $(x, y) \in D$.
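The minibatching identity above is easy to verify numerically; the following self-contained NumPy sketch (with illustrative names) implements the batch-ensemble dense layer of (2) for one member and checks it against the naive formulation.

import numpy as np

def batch_ensemble_dense(X, W, r_k, s_k):
    # Member-k dense layer of (2): X @ (W o (r_k s_k^T)), computed via the
    # identity X[W o (r_k s_k^T)] = [(X o r_k^T) W] o s_k^T, so no per-member
    # weight matrix is ever materialized.
    return ((X * r_k) @ W) * s_k

rng = np.random.default_rng(1)
X, W = rng.normal(size=(5, 4)), rng.normal(size=(4, 3))
r_k, s_k = rng.normal(size=4), rng.normal(size=3)
assert np.allclose(batch_ensemble_dense(X, W, r_k, s_k),
                   X @ (W * np.outer(r_k, s_k)))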
Second, the tuning step makes a stochastic gradient update of $\xi_t$ by minimizing some validation objective (e.g., the cross entropy): $$\min_{\xi_t} \mathbb{E}_{\lambda \sim p(\lambda|\xi_t), (x,y) \in D_{\text{val}}}\big[\ell_{\text{val}}(f_\Theta(x, \lambda), y)\big]. \quad (5)$$ In (5), derivatives are taken through samples $\lambda \sim p(\lambda|\xi_t)$ by applying the reparametrization trick [39]. To prevent $p(\lambda|\xi_t)$ from collapsing to a degenerate distribution, and inspired by variational inference, the authors of [52] add an entropy regularization term $\mathcal{H}[\cdot]$ controlled by $\tau \geq 0$, so that (5) becomes $$\min_{\xi_t} \mathbb{E}_{\lambda \sim p(\lambda|\xi_t), (x,y) \in D_{\text{val}}}\big[\ell_{\text{val}}(f_\Theta(x, \lambda), y) - \tau \mathcal{H}[p(\lambda|\xi_t)]\big]. \quad (6)$$ 3 Hyper-deep ensembles Figure 2-(left) visualizes different models $f_\theta(\cdot, \lambda)$ according to their hyperparameters $\lambda$ along the x-axis and their initialization $\theta_{\text{init.}}$ on the y-axis. In this view, a deep ensemble corresponds to a "column" where models with different random initialization are combined together, for a fixed $\lambda$. On the other hand, a "row" corresponds to the combination of models with different hyperparameters. Such a "row" typically stems from the application of some hyperparameter-tuning techniques [20]. Fixed initialization hyper ensembles. Given the simplicity, broad applicability, and performance of the greedy algorithm from [13] (e.g., in auto-ML settings [21]), we use it as our canonical procedure to generate a "row", i.e., an ensemble of neural networks with fixed parameter initialization and various hyperparameters. We refer to it as fixed init hyper ensemble. For completeness, we recall the procedure from [13] in Appendix A (Algorithm 2, named hyper_ens). Given an input set of models (e.g., from random search), hyper_ens greedily grows an ensemble until some target size $K$ is met, by selecting the model with the best improvement of some score, e.g., the validation log-likelihood. We select the models with replacement to be able to learn weighted combinations thereof (see Section 2.1 in [13]); a minimal sketch of this selection step is given after Algorithm 1 below. Note that the procedure from [13] does not require the models to have a fixed initialization: we consider here a fixed initialization to isolate the effect of just varying the hyperparameters (while deep ensembles vary only the initialization, with fixed hyperparameters). Our goal is two-fold: (a) we want to demonstrate the complementarity of random initialization and hyperparameters as sources of diversity in the ensemble, and (b) we want to design a simple algorithmic scheme that exploits both sources of diversity while encompassing the construction of deep ensembles as a subcase. We defer the study of (a) to Section 5 and next focus on (b). Hyper-deep ensembles. We proceed in three main steps, as summarized in Algorithm 1. In lines 1-2, we first generate one "row" according to hyper_ens based on the results of random search [6] as input. We then tile and stratify that "row" by training the models for different random initializations (see lines 4-7). The resulting set of models is illustrated in Figure 2-(left). In line 10, we finally re-apply hyper_ens on that stratified set of models to extract an ensemble that can exploit the two sources of diversity. By design, a deep ensemble is one possible outcome of this procedure, one "column", and so is the fixed init hyper ensemble described in the previous paragraph, one "row".

Algorithm 1: hyper_deep_ens(K, κ)
 1: M_0 = {f_{θ_j}(·, λ_j)}_{j=1}^κ ← rand_search(κ)
 2: E_0 ← hyper_ens(M_0, K); E_strat = {}
 3: foreach f_θ(·, λ) ∈ E_0.unique() do
 4:   foreach k ∈ {1, ..., K} do
 5:     θ′ ← random initialization
 6:     f_{θ_k}(·, λ) ← train f_{θ′}(·, λ)
 7:     E_strat = E_strat ∪ {f_{θ_k}(·, λ)}
 8:   end
 9: end
10: return hyper_ens(E_strat, K)
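To make the greedy selection with replacement concrete, here is a minimal NumPy sketch of the hyper_ens step invoked in lines 2 and 10 of Algorithm 1; it scores candidates by validation NLL, and the names are illustrative rather than the released implementation.

import numpy as np

def hyper_ens(val_probs, val_labels, K):
    # val_probs: (M, n_val, classes) validation predictions of M candidate
    # models (e.g., from rand_search). Greedily grow an ensemble of size K,
    # selecting with replacement the candidate whose inclusion most improves
    # the validation log-likelihood of the averaged prediction.
    def nll(p):
        return -np.mean(np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12))

    chosen, running_sum = [], np.zeros_like(val_probs[0])
    for _ in range(K):
        scores = [nll((running_sum + p) / (len(chosen) + 1)) for p in val_probs]
        best = int(np.argmin(scores))
        chosen.append(best)
        running_sum += val_probs[best]
    return chosen  # indices of the selected models; repeats act as weights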
In lines 1-2, running random search leads to a set of κ models (i.e., $M_0$). If we were to stratify all of them, we would need $K$ seeds for each of those κ models, hence a total of $O(\kappa K)$ models to train. However, we first apply hyper_ens to extract $K$ models out of the κ available ones, with $K \ll \kappa$. The stratification then needs $K$ seeds for each of those $K$ models (lines 4-7), thus $O(K^2)$ models to train. We will see in Section 5 that even with standard hyperparameters, e.g., dropout or L2 parameters, Algorithm 1 can lead to substantial improvements over deep ensembles. In Appendix C.7.5, we conduct ablation studies relating our approach to the top-K strategy used in [60] and NES-RS from [75]. 4 Hyper-batch ensembles This section presents our efficient approach to construct ensembles over different hyperparameters. 4.1 Composing the layer structures of batch ensembles and self-tuning networks The core idea lies in the composition of the layers used by batch ensembles [69] for ensembling parameters and by self-tuning networks [52] for parameterizing the layer as an explicit function of hyperparameters. The composition preserves complementary features of both approaches. We continue the example of the dense layer from Section 2.1-Section 2.2. The convolutional layer is described in Appendix B.1. Assuming an ensemble of size $K$, we have, for $k \in \{1, \ldots, K\}$, $$W_k(\lambda_k) = W \circ (r_k s_k^\top) + [\Delta \circ (u_k v_k^\top)] \circ e(\lambda_k)^\top \quad \text{and} \quad b_k(\lambda_k) = b_k + \delta_k \circ e'(\lambda_k), \quad (7)$$ where the $r_k$'s (respectively, $u_k$'s) in $\mathbb{R}^r$ and the $s_k$'s (respectively, $v_k$'s) in $\mathbb{R}^s$ are vectors which diversify the shared matrix $W$ (respectively, $\Delta$) in $\mathbb{R}^{r \times s}$; and the $b_k$'s in $\mathbb{R}^s$ and $\delta_k$'s in $\mathbb{R}^s$ are the bias terms for each of the $K$ ensemble members. We comment on some important properties of (7): • As noted by [69], formulation (2) includes a set of rank-1 factors which diversify individual ensemble-member weights. In (7), the rank-1 factors $r_k s_k^\top$ and $u_k v_k^\top$ capture this weight diversity for each respective term. • As noted by [52], formulation (3) captures local hyperparameter variations in the vicinity of some $\lambda$. The term $[\Delta \circ (u_k v_k^\top)] \circ e(\lambda_k)^\top$ in (7) extends this behavior to the vicinity of the $K$ hyperparameters $\{\lambda_1, \ldots, \lambda_K\}$ indexing the $K$ ensemble members. • Equation (7) maintains the compactness of the original layers of [52, 69], with a resulting memory footprint about twice as large as [69] and equivalent to [52] up to the rank-1 factors. • Given $K$ hyperparameters $\{\lambda_1, \ldots, \lambda_K\}$ and a batch of inputs $X \in \mathbb{R}^{b \times r}$, the structure of (7) preserves the efficient minibatching of [69]. If $\mathbf{1}_b$ is the vector of ones in $\mathbb{R}^b$, we can tile $X$, $\mathbf{1}_b \lambda_k^\top$ and $\mathbf{1}_b e(\lambda_k)^\top$, enabling all $K$ members to predict in a single forward pass; a sketch of the member-$k$ computation follows below. • From an implementation perspective, (7) enables direct reuse of existing code, e.g., DenseBatchEnsemble and Conv2DBatchEnsemble from [68]. The implementation of our layers can be found in https://github.com/google/edward2. 4.2 Objective function: from single model to ensemble We first need to slightly overload the notation of Section 2.2, and we write $f_\Theta(x, \lambda_k)$ to denote the prediction for the input $x$ of the $k$-th ensemble member indexed by $\lambda_k$. In $\Theta$, we pack all the parameters of $f$, such as those described in the example of the dense layer in Section 4.1. In particular, predicting with $\lambda_k$ is understood as using the corresponding parameters $\{W_k(\lambda_k), b_k(\lambda_k)\}$ in (7).
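Before turning to the training objective, here is a minimal NumPy sketch of the member-k dense computation in (7), reusing the batch-ensemble identity of Section 2.1; the names and shapes are illustrative.

import numpy as np

def hyper_batch_ensemble_dense(X, W, Delta, r_k, s_k, u_k, v_k, b_k, delta_k,
                               e_lam, e_lam_prime):
    # W_k(lambda_k) = W o (r_k s_k^T) + [Delta o (u_k v_k^T)] o e(lambda_k)^T and
    # b_k(lambda_k) = b_k + delta_k o e'(lambda_k). All per-member quantities are
    # vectors: X (b, r); W, Delta (r, s); r_k, u_k (r,); s_k, v_k, b_k, delta_k,
    # e_lam, e_lam_prime (s,).
    out = ((X * r_k) @ W) * s_k                 # batch-ensemble term of (2)
    out += (((X * u_k) @ Delta) * v_k) * e_lam  # self-tuning term, scaled by e(lambda_k)
    return out + b_k + delta_k * e_lam_prime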
Training and validation objectives. We want the ensemble members to account for a diverse combination of hyperparameters. As a result, each ensemble member is assigned its own distribution of hyperparameters, which we write $p_t(\lambda_k) = p(\lambda_k|\xi_{k,t})$ for $k \in \{1, \ldots, K\}$. Along the lines of (4), we consider an expected training objective which now simultaneously operates over $\Lambda_K = \{\lambda_k\}_{k=1}^K$: $$\min_\Theta \mathbb{E}_{\Lambda_K \sim q_t, (x,y) \in D}\big[L(x, y, \Theta, \Lambda_K)\big] \quad \text{with} \quad q_t(\Lambda_K) = q(\Lambda_K|\{\xi_{k,t}\}_{k=1}^K) = \prod_{k=1}^K p_t(\lambda_k), \quad (8)$$ and where $L$, compared with (1), is extended to handle the ensemble predictions, $L(x, y, \Theta, \Lambda_K) = \ell(\{f_\Theta(x, \lambda_k)\}_{k=1}^K, y) + \Omega(\Theta, \{\lambda_k\}_{k=1}^K)$. For example, the loss $\ell$ can be the ensemble cross entropy or the average ensemble-member cross entropy (in our experiments, we use the latter, as recent results suggest it often generalizes better [17]). The introduction of one distribution $p_t$ per ensemble member also affects the validation step of the alternating optimization; in particular, we adapt (6) to become $$\min_{\{\xi_{k,t}\}_{k=1}^K} \mathbb{E}_{\Lambda_K \sim q_t, (x,y) \in D_{\text{val}}}\big[\ell_{\text{val}}(\{f_\Theta(x, \lambda_k)\}_{k=1}^K, y) - \tau \mathcal{H}[q_t(\Lambda_K)]\big]. \quad (9)$$ Note that the extensions (8)-(9) with $K = 1$ fall back to the standard formulation of [52]. In our experiments, we take $\Omega$ to be L2 regularizers applied to the parameters $W_k(\lambda_k)$ and $b_k(\lambda_k)$ of each ensemble member. In Appendix B.2, we show how to efficiently vectorize the computation of $\Omega$ across the ensemble members and mini-batches of $\{\lambda_k\}_{k=1}^K$ sampled from $q_t$, as required by (8). In practice, we use one sample of $\Lambda_K$ for each data point in the batch: for MLP/LeNet (Section 5.1), we use 256, while for ResNet-20/Wide ResNet 28-10 (Section 5.2), we use 512 (64 for each of 8 workers). Definition of $p_t$. In the experiments of Section 5, we manipulate hyperparameters $\lambda$ that are positive and bounded (e.g., a dropout rate). For each ensemble member with hyperparameters $\lambda_k \in \mathbb{R}^m$, we thus define its distribution $p_t(\lambda_k) = p(\lambda_k|\xi_{k,t})$ to be $m$ independent log-uniform distributions (one per dimension in $\lambda_k$), which is a standard choice for hyperparameter tuning, e.g., [5, 6, 53]. With this choice, $\xi_{k,t}$ contains $2m$ parameters, namely the bounds of the ranges of the $m$ distributions. Similar to [52], at prediction time, we take $\lambda_k$ to be equal to the mean $\lambda_k^{\text{mean}}$ of the distribution $p_t(\lambda_k)$. In Appendix B.3, we provide additional details about $p_t$. The validation steps (6) and (9) seek to optimize the bounds of the ranges. More specifically, the loss $\ell_{\text{val}}$ favors compact ranges around a good hyperparameter value, whereas the entropy term encourages wide ranges, as traded off by $\tau$. We provide an example of the optimization trajectory of $\lambda$ and its range in Figure 2-(right), where $\lambda$ corresponds to the mean of the log-uniform distribution.
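As an illustration of this parametrization, here is a minimal NumPy sketch of reparametrized sampling from a log-uniform distribution whose range bounds play the role of ξ_{k,t}; the helper names are ours, and the mean formula is the standard one for a log-uniform density.

import numpy as np

def sample_log_uniform(log_lo, log_hi, n, rng):
    # Reparametrized draw: the uniform noise eps is independent of the bounds,
    # so gradients of a downstream loss can flow to (log_lo, log_hi), as
    # required by the validation steps (6) and (9).
    eps = rng.uniform(size=(n, len(log_lo)))
    return np.exp(log_lo + eps * (log_hi - log_lo))

def log_uniform_mean(log_lo, log_hi):
    # Mean of a log-uniform distribution on [lo, hi]: (hi - lo) / log(hi / lo);
    # used as the prediction-time value of lambda_k.
    lo, hi = np.exp(log_lo), np.exp(log_hi)
    return (hi - lo) / np.maximum(log_hi - log_lo, 1e-12)

rng = np.random.default_rng(2)
log_lo, log_hi = np.log([1e-4, 0.05]), np.log([1e-1, 0.5])
lam = sample_log_uniform(log_lo, log_hi, n=4, rng=rng)  # (4, 2) samples of lambda_k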
5 Experiments Throughout the experiments, we use both metrics that depend on the predictive uncertainty—negative log-likelihood (NLL) and expected calibration error (ECE) [55]—and metrics that do not, e.g., the classification accuracy. The supplementary material also reports the Brier score [10] (for which we typically observed a strong correlation with NLL). Moreover, as a diversity metric, we take the predictive disagreement of the ensemble members normalized by (1 - accuracy), as used in [22]. In the tables, we write the number of ensemble members in brackets "(·)" next to the names of the methods. 5.1 Multi-layer perceptron and LeNet on Fashion MNIST & CIFAR-100 To validate our approaches and run numerous ablation studies, we first focus on small-scale models, namely MLP and LeNet [44], over CIFAR-100 [40] and Fashion MNIST [73]. For both models, we add a dropout layer [66] before their last layer. For each pair of dataset/model type, we consider two tuning settings involving the dropout rate and different L2 regularizers defined at varied granularity, e.g., layerwise. Appendix C.1 gives all the details about the training, tuning and dataset definitions. Baselines. We compare our methods, (i) hyper-deep ens: the hyper-deep ensemble of Section 3, and (ii) hyper-batch ens: the hyper-batch ensemble of Section 4, to (a) rand search: the best single model after 50 trials of random search [6], (b) Bayes opt: the best single model after 50 trials of Bayesian optimization [63, 27], (c) deep ens: deep ensemble [43] using the best hyperparameters found by random search, (d) batch ens: batch ensemble [69], (e) STN: self-tuning networks [52], and (f) fixed init hyper ens: defined in Section 3. The supplementary material details how we tune the hyperparameters specific to batch ens, STN and hyper-batch ens (see Appendix C.2, Appendix C.3 and Appendix C.4, and further ablations about e in Appendix C.5 and τ in Appendix C.6). Note that batch ens needs the tuning of its own hyperparameters and those of the MLP/LeNet models, while STN and hyper-batch ens automatically tune the latter. We highlight below the key conclusions from Table 1 with single models and ensembles of size 3. The same conclusions can also be drawn for ensembles of size 5 (see Appendix C.7.1). Ensembles benefit from both weight and hyperparameter diversity. With the pictorial view of Figure 2 in mind, fixed init hyper ens, i.e., a "row", tends to outperform deep ens, i.e., a "column". Moreover, those two approaches (as well as the other methods of the benchmark) are outperformed by our stratified procedure hyper-deep ens, demonstrating the benefit of combining hyperparameter and initialization diversity (see Appendix C.7.2 for a detailed assessment of the statistical significance). In Appendix C.7.3, we study the diversity more specifically and show that hyper-deep ens indeed has more diverse predictions than deep ens. Efficient ensembles benefit from both weight and hyperparameter diversity. Among the efficient approaches (the three rightmost columns of Table 1), hyper-batch ens performs best. It improves upon both STN and batch ens, the two methods it builds upon. In line with [52], STN typically matches or improves upon rand search and Bayes opt. As explained in Section 4.1, hyper-batch ens has, however, twice the number of parameters of batch ens. In Appendix C.7.4, we thus compare with a "deep ensemble of two batch ensembles" (i.e., resulting in the same number of parameters but twice as many members as hyper-batch ens). In that case as well, hyper-batch ens either improves upon or matches the performance of the combination of two batch ens. 5.2 ResNet-20 and Wide ResNet-28-10 on CIFAR-10 & CIFAR-100 We evaluate our approach in a large-scale setting with ResNet-20 [31] and Wide ResNet 28-10 [74] models, as they are simple architectures with competitive performance on image classification tasks. We consider six different L2 regularization hyperparameters (one for each block of the ResNet) and a label smoothing hyperparameter. We show results on CIFAR-10, CIFAR-100 and corruptions of CIFAR-10 [33, 64]. Moreover, in Appendix D.3, we provide additional out-of-distribution evaluations along the lines of [32]. Further details about the experimental setting can be found in Appendix D. CIFAR-10/100. We compare hyper-deep ens with a single model (tuned as explained next) and deep ens of varying ensemble sizes.
1. What is the focus and contribution of the paper on hyper-parameter tuning and random initialization? 2. What are the strengths of the proposed approach, particularly in terms of its ability to unify different methods and apply batch ensemble techniques? 3. What are the weaknesses of the paper, especially regarding the marginal improvements and lack of innovation in some aspects? 4. How does the reviewer assess the effectiveness of the proposed method in controlling the variance of p_t such that it can perform a good local search job around \lambda_k? 5. Do you have any concerns or suggestions regarding the empirical results and their believability?
Summary and Contributions Strengths Weaknesses
Summary and Contributions 1. This paper unifies hyper-parameter tuning and random initialization as two dimensions to encourage model diversity. When combining these two methods, the overall result is better than either method alone. 2. The paper further applies the recently proposed batch ensemble technique to simulate deep ensembles and extends the existing self-tuning networks to the ensemble learning scenario. 3. Empirical results are provided on benchmark datasets with different architectures. Strengths Empirical results look believable and the authors promise to release code upon acceptance. Weaknesses 1. The proposed method improves only marginally over previous methods. 2. The proposed method is a combination of existing techniques. The main innovation I can see so far is the design of self-tuning networks for ensemble learning. 3. The paper claims that two sources of diversity jointly contribute to the overall ensemble model. Actually, there is a third source of diversity during training, from p_t(\lambda_k), which controls the diversity of \lambda_k. Assuming p_t does not degenerate, how can one effectively control the variance of p_t such that it does a good local search around \lambda_k? It would be great to have a qualitative explanation that the multiple ensemble-member distributions p_t(\lambda_k) for k=1,...,K work well independently and can jointly explore a wider space of lambda.
NIPS
Title Hyperparameter Ensembles for Robustness and Uncertainty Quantification Abstract Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles. N/A 1 Introduction Neural networks are well-suited to form ensembles of models [30]. Indeed, neural networks trained from different random initialization can lead to equally wellperforming models that are nonetheless diverse in that they make complementary errors on held-out data [30]. This property is explained by the multi-modal nature of their loss landscape [24] and the randomness induced by both their initialization and the stochastic methods commonly used to train them [8, 38, 9]. Many mechanisms have been proposed to further foster diversity in ensembles of neural networks, e.g., based on cyclical learning rates [36] or Bayesian analysis [17]. In this paper, we focus on exploiting the diversity induced by combining neural networks defined by different hyperparameters. This concept is already wellestablished [13] and the auto-ML community actively applies it [21, 65, 53, 46]. We build upon this research with the following two complementary goals. First, for performance independent of computational and memory budget, we seek to improve upon deep ensembles [43], the current state-of-the-art ensembling method in terms of robustness and uncertainty quantification [64, 28]. To this end, we develop a simple stratification scheme which combines random search and the greedy selection of hyperparameters from [13] with the benefit 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. of multiple random initializations per hyperparameter like in deep ensembles. Figure 1 illustrates our algorithm for a Wide ResNet 28-10 where it leads to substantial improvements, highlighting the benefits of combining different initialization and hyperparameters. Second, we seek to improve upon batch ensembles [69], the current state-of-the-art in efficient ensembles. To this end, we propose a parameterization combining that of [69] and self-tuning networks [52], which enables both weight and hyperparameter diversity. Our approach is a drop-in replacement that outperforms batch ensembles and does not need a separate tuning of the hyperparameters. 1.1 Related work Ensembles over neural network weights. Combining the outputs of several neural networks to improve their single performance has a long history, e.g., [47, 30, 25, 41, 58, 15]. 
Since the quality of an ensemble hinges on the diversity of its members [30], many mechanisms were developed to generate diverse ensemble members. For instance, cyclical learning-rate schedules can explore several local minima [36, 76] where ensemble members can be snapshot. Other examples are MC dropout [23] or the random initialization itself, possibly combined with the bootstrap [45, 43]. More generally, Bayesian neural networks can be seen as ensembles with members being weighted by the (approximated) posterior distribution over the parameters [34, 51, 56, 7, 71, 72]. Hyperparameter ensembles. Hyperparameter-tuning methods [20] typically produce a pool of models from which ensembles can be constructed post hoc, e.g., [65]. This idea has been made systematic as part of auto-sklearn [21] and successfully exploited in several other contexts, e.g., [19] and specifically for neural networks [53] as well as in computer vision [60] and genetics [35]. In particular, the greedy ensemble construction from [13] (and later variations thereof [12]) was shown to work best among other algorithms, either more expensive or more prone to overfitting. To the best of our knowledge, such ensembles based on hyperparameters have not been studied in the light of predictive uncertainty. Moreover, we are not aware of existing methods to efficiently build such ensembles, similarly to what batch ensembles do for deep ensembles. Finally, recent research in Bayesian optimization has also focused on directly optimizing the performance of the ensemble while tuning the hyperparameters [46]. Hyperparameter ensembles also connect closely to probabilistic models over structures. These works often analyze Bayesian nonparametric distributions, such as over depth and width of a neural network, leveraging Markov chain Monte Carlo for inference [37, 1, 18, 42]. In this work, we examine more parametric assumptions, building on the success of variational inference and mixture distributions: for example, the validation step in hyper-batch ensemble can be viewed as a mixture variational posterior and the entropy penalty is the ELBO’s KL divergence toward a uniform prior. Concurrent to our paper, [75] construct neural network ensembles within the context of neural architecture search, showing improved robustness for predictions with distributional shift. One of their methods, NES-RS, has similarities with our hyper-deep ensembles (see Section 3), also relying on both random search and [13] to form ensembles, but do not stratify over different initializations. We vary the hyperparameters while keeping the architecture fixed while [75] study the converse. Furthermore, [75] do not explore a parameter- and computationally-efficient method (see Section 4). Efficient hyperparameter tuning & best-response function. Some hyperparameters of a neural network, e.g., its L2 regularization parameter(s), can be optimized by estimating the best-response function [26], i.e., the mapping from the hyperparameters to the parameters of the neural networks solving the problem at hand [11]. Learning this mapping is an instance of learning an hypernetwork [61, 62, 29] and falls within the scope of bilevel optimization problems [14]. Because of the daunting complexity of this mapping, [50, 52] proposed scalable local approximations of the best-response function. Similar methodology was also employed for style transfer and image compression [3, 16]. 
The self-tuning networks from [52] are an important building block of our approach wherein we extend their setting to the case of an ensemble over different hyperparameters. 1.2 Contributions We examine two regimes to exploit hyperparameter diversity: (a) ensemble performance independent of budget and (b) ensemble performance seeking parameter efficiency, where, respectively, deep and batch ensembles [43, 69] are state-of-the-art. We propose one ensemble method for each regime: (a) Hyper-deep ensembles. We define a greedy algorithm to form ensembles of neural networks exploiting two sources of diversity: varied hyperparameters and random initialization. By stratifying models with respect to the latter, our algorithm subsumes deep ensembles that we outperform in our experiments. Our approach is a simple, strong baseline that we hope will be used in future research. (b) Hyper-batch ensembles. We efficiently construct ensembles of neural networks defined over different hyperparameters. Both the ensemble members and their hyperparameters are learned endto-end in a single training procedure, directly maximizing the ensemble performance. Our approach outperforms batch ensembles and generalizes the layer structure of [52] and [69], while keeping their original memory compactness and efficient minibatching for parallel training and prediction. We illustrate the benefits of our two ensemble methods on image classification tasks, with multi-layer perceptron, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, in terms of both predictive performance and uncertainty. The code for generic hyper-batch ensemble layers can be found in https://github.com/google/edward2 and the code to reproduce the experiments of Section 5.2 is part of https://github.com/google/uncertainty-baselines. 2 Background We introduce notation and background required to define our approach. Consider an i.i.d. classification setting with data D = {(xn, yn)}Nn=1 where xn ∈ Rd is the feature vector corresponding to the n-th example and yn its class label. We seek to learn a classifier in the form of a neural network fθ where all its parameters (weights and bias terms) are summarized in θ ∈ Rp. In addition to its primary parameters θ, the model fθ will also depend on m hyperparameters that we refer to as λ ∈ Rm. For instance, an entry in λ could correspond to the dropout rate of a given layer in fθ. Equipped with some loss function `, e.g., the cross entropy, and some regularization term Ω(·,λ), e.g., the squared L2 norm with a strength defined by an entry of λ, we are interested in θ̂(λ) ∈ arg min θ∈Rp E(x,y)∈D [ L(x, y,θ,λ) ] with L(x, y,θ,λ) = `(fθ(x,λ), y) + Ω(θ,λ), (1) where E(x,y)∈D[·] stands for the expectation with a uniform distribution over D. As we shall see in Section 5, the loss ` = `λ can also depend on λ, for instance to control a label smoothing parameter [67]. In general, λ is chosen based on some held-out evaluation metric by grid search, random search [6] or more sophisticated hyperparameter-tuning methods [20]. 2.1 Deep ensembles and batch ensembles Deep ensembles [43] are a simple ensembling method where neural networks with different random initialization are combined. Deep ensembles lead to remarkable predictive performance and robust uncertainty estimates [64, 28]. Given some hyperparameters λ0, a deep ensemble of size K amounts to solving K times (1) with random initialization and aggregating the outputs of {fθ̂k(λ0)(·,λ0)} K k=1. 
Batch ensembles [69] are a state-of-the-art efficient alternative to deep ensembles, preserving their performance while reducing their computational and memory burden. To simplify the presentation, we focus on the example of a dense layer in fθ , with weight matrix W ∈ Rr×s where r and s denote the input and output dimensions of the layer respectively. A deep ensemble of size K needs to train, predict with, and store K weight matrices {Wk}Kk=1. Instead, batch ensembles consider a single matrix W ∈ Rr×s together with two sets of auxiliary vectors [r1, . . . , rK ] ∈ Rr×K and [s1, . . . , sK ] ∈ Rs×K such that the role of Wk is played by W ◦ (rks>k ) for each k ∈ {1, . . . ,K}, (2) where we denote by ◦ the element-wise product (which we will broadcast row-wise or column-wise depending on the shapes at play). Not only does (2) lead to a memory saving, but it also allows for efficient minibatching, where each datapoint may use a different ensemble member. Given a batch of inputs X ∈ Rb×r, the predictions for the k-th member equal X[W ◦ (rks>k )] = [(X ◦ r>k )W] ◦ s>k . By properly tiling the batch X, the K members can thus predict in parallel in one forward pass [69]. 2.2 Self-tuning networks Hyperparameter tuning typically involves multiple runs of the training procedure. One efficient alternative [50, 52] is to approximate the best-response function, i.e., the mapping from λ to optimal parameters θ̂(λ). The local approximation of [52] captures the changes of λ by scaling and shifting the hidden units of fθ , which requires in turn extra parameters θ′ ∈ Rp ′ , summarized in Θ = {θ,θ′}. [52] call the resulting approach self-tuning network since fΘ tunes online its own hyperparameters λ. In the sequel, λ will be continuous such as dropout rates, L2 penalties and label smoothing. Example of the dense layer. We illustrate the choice and role of θ′ in the example of a dense layer (the convolutional layer is similar to [59]; see details in [52]). The weight matrix W ∈ Rr×s and bias b ∈ Rs of a dense layer are defined as (with ∆ and δ of the same shapes as W and b respectively), W(λ) = W + ∆ ◦ e(λ)> and b(λ) = b + δ ◦ e′(λ), (3) where e(λ) ∈ Rs and e′(λ) ∈ Rs are real-valued embeddings of λ. In [52], the embedding is linear, i.e., e(λ) = Cλ and e′(λ) = C′λ. In this example, we have original parameters θ = {W,b} as well as the additional parameters θ′ = {∆, δ,C,C′}. Training objective. Since θ′ captures changes in θ induced by changes in λ, [50, 52] replace the typical objective (1), defined for a single value of λ, with an expected objective [50, 52, 16], min Θ∈Rp+p′ Eλ∼p(λ),(x,y)∈D [ L(x, y,Θ,λ) ] , (4) where p(λ) denotes some distribution over the hyperparameters λ. When p is kept fixed during the optimization of (4), the authors of [50] observed that θ̂(λ) is not well approximated and proposed instead to use a distribution pt(λ) = p(λ|ξt) varying with the iteration t. In our work we choose p(·|ξt) to be a log-uniform distribution with ξt containing the bounds of the ranges of λ (see Section 4). The key benefit from (4) is that a single (though, more costly) training gives access to a mapping λ 7→ fΘ̂(·,λ) which approximates the behavior of fΘ̂ for hyperparameters in the support of p(λ). Alternating optimization. The procedure followed by [52] consists in alternating between training and tuning steps. First, the training step performs a stochastic gradient update of Θ in (4), jointly sampling λ ∼ p(λ|ξt) and (x, y) ∈ D. 
Second, the tuning step makes a stochastic gradient update of ξt by minimizing some validation objective (e.g., the cross entropy): min ξt Eλ∼p(λ|ξt),(x,y)∈Dval [ `val(fΘ(x,λ), y) ] . (5) In (5), derivatives are taken through samples λ ∼ p(λ|ξt) by applying the reparametrization trick [39]. To prevent p(λ|ξt) from collapsing to a degenerate distribution, and inspired by variational inference, the authors of [52] add an entropy regularization termH[·] controlled by τ ≥ 0 so that (5) becomes min ξt Eλ∼p(λ|ξt),(x,y)∈Dval [ `val(fΘ(x,λ), y)− τH[p(λ|ξt)] ] . (6) 3 Hyper-deep ensembles Figure 2-(left) visualizes different models fθ(·,λ) according to their hyperparameters λ along the x-axis and their initialization θinit. on the y-axis. In this view, a deep ensemble corresponds to a “column” where models with different random initialization are combined together, for a fixed λ. On the other hand, a “row” corresponds to the combination of models with different hyperparameters. Such a “row” typically stems from the application of some hyperparameter-tuning techniques [20]. Fixed initialization hyper ensembles. Given the simplicity, broad applicability, and performance of the greedy algorithm from [13]—e.g., in auto-ML settings [21], we use it as our canonical procedure to generate a “row”, i.e., an ensemble of neural networks with fixed parameter initialization and various hyperparameters. We refer to it as fixed init hyper ensemble. For completeness, we recall the procedure from [13] in Appendix A (Algorithm 2, named hyper_ens). Given an input set of models (e.g., from random search), hyper_ens greedily grows an ensemble until some target size K is met by selecting the model with the best improvement of some score, e.g., the validation log-likelihood. We select the models with replacement to be able to learn weighted combinations thereof (see Section 2.1 in [13]). Note that the procedure from [13] does not require the models to have a fixed initialization: we consider here a fixed initialization to isolate the effect of just varying the hyperparameters (while deep ensembles vary only the initialization, with fixed hyperparameters). Our goal is two-fold: (a) we want to demonstrate the complementarity of random initialization and hyperparameters as sources of diversity in the ensemble, and (b) design a simple algorithmic scheme that exploits both sources of diversity while encompassing the construction of deep ensembles as a subcase. We defer to Section 5 the study of (a) and next focus on (b). Hyper-deep ensembles. We proceed in three main steps, as summarized in Algorithm 1. In lines 1-2, we first generate one “row” according to hyper_ens based on the results of random search [6] as input. We then tile and stratify that “row” by training the models for different random initialization (see lines 4-7). The resulting set of models is illustrated in Figure 2-(left). In line 10, we finally re-apply hyper_ens on that stratified set of models to extract an ensemble that can exploit the two sources of diversity. By design, a deep ensemble is one possible outcome of this procedure—one “column”—and so is fixed init hyper ensemble described in the previous paragraph—one “row”. Algorithm 1: hyper_deep_ens(K,κ) 1 M0 = {fθj (·,λj)}κj=1←− rand_search(κ); 2 E0 ←− hyper_ens(M0, K) and Estrat. = { }; 3 foreach fθ(·,λ) ∈ E0.unique() do 4 foreach k ∈ {1, . . . ,K} do 5 θ′ ←− random initialization; 6 fθk(·,λ)←− train fθ′(·,λ); 7 Estrat. = Estrat. 
∪ { fθk(·,λ)}; 8 end 9 end 10 return hyper_ens(Estrat., K); In lines 1-2, running random search leads to a set of κ models (i.e.,M0). If we were to stratify all of them, we would need K seeds for each of those κ models, hence a total of O(κK) models to train. However, we first apply hyper_ens to extract K models out of the κ available ones, with K κ. The stratification then needs K seeds for each of those K models (lines 4-7), thus O(K2) models to train. We will see in Section 5 that even with standard hyperparameters, e.g., dropout or L2 parameters, Algorithm 1 can lead to substantial improvements over deep ensembles. In Appendix C.7.5, we conduct ablation studies to relate to the top-K strategy used in [60] and NES-RS from [75]. 4 Hyper-batch ensembles This section presents our efficient approach to construct ensembles over different hyperparameters. 4.1 Composing the layer structures of batch ensembles and self-tuning networks The core idea lies in the composition of the layers used by batch ensembles [69] for ensembling parameters and self-tuning networks [52] for parameterizing the layer as an explicit function of hyperparameters. The composition preserves complementary features from both approaches. We continue the example of the dense layer from Section 2.1-Section 2.2. The convolutional layer is described in Appendix B.1. Assuming an ensemble of size K, we have for k ∈ {1, . . . ,K} Wk(λk) = W ◦ (rks>k ) + [∆ ◦ (ukv>k )] ◦ e(λk)> and bk(λk) = bk + δk ◦ e′(λk), (7) where the rk’s (respectively, uk’s) in Rr and sk’s (respectively, vk’s) in Rs are vectors which diversify the shared matrix W (respectively, ∆) in Rr×s; and the bk’s in Rs and δk’s in Rs are the bias terms for each of the K ensemble members. We comment on some important properties of (7): • As noted by [69], formulation (2) includes a set of rank-1 factors which diversify individual ensemble member weights. In (7), the rank-1 factors rks>k and ukv > k capture this weight diversity for each respective term. • As noted by [52], formulation (3) captures local hyperparameter variations in the vicinity of some λ. The term [∆ ◦ (ukv>k )] ◦ e(λk)> in (7) extends this behavior to the vicinity of the K hyperparameters {λ1, . . . ,λK} indexing the K ensemble members. • Equation (7) maintains the compactness of the original layers of [52, 69] with a resulting memory footprint about twice as large as [69] and equivalent to [52] up to the rank-1 factors. • Given K hyperparameters {λ1, . . . ,λK} and a batch of inputs X ∈ Rb×r, the structure of (7) preserves the efficient minibatching of [69]. If 1b is the vector of ones in Rb, we can tile X, 1bλ>k and 1be(λk) >, enabling all K members to predict in a single forward pass. • From an implementation perspective, (7) enables direct reuse of existing code, e.g., DenseBatchEnsemble and Conv2DBatchEnsemble from [68]. The implementation of our layers can be found in https://github.com/google/edward2. 4.2 Objective function: from single model to ensemble We first need to slightly overload the notation from Section 2.2 and we write fΘ(x,λk) to denote the prediction for the input x of the k-th ensemble member indexed by λk. In Θ, we pack all the parameters of f , as those described in the example of the dense layer in Section 4.1. In particular, predicting with λk is understood as using the corresponding parameters {Wk(λk),bk(λk)} in (7). Training and validation objectives. We want the ensemble members to account for a diverse combination of hyperparameters. 
4.2 Objective function: from single model to ensemble

We first need to slightly overload the notation from Section 2.2: we write f_Θ(x, λ_k) to denote the prediction for the input x of the k-th ensemble member, indexed by λ_k. In Θ, we pack all the parameters of f, such as those described in the example of the dense layer in Section 4.1. In particular, predicting with λ_k is understood as using the corresponding parameters {W_k(λ_k), b_k(λ_k)} in (7).

Training and validation objectives. We want the ensemble members to account for a diverse combination of hyperparameters. As a result, each ensemble member is assigned its own distribution over hyperparameters, which we write p_t(λ_k) = p(λ_k | ξ_{k,t}) for k ∈ {1, ..., K}. Along the lines of (4), we consider an expected training objective which now operates simultaneously over Λ_K = {λ_k}_{k=1..K}:

min_Θ E_{Λ_K ∼ q_t, (x,y) ∈ D} [ L(x, y, Θ, Λ_K) ]  with  q_t(Λ_K) = q(Λ_K | {ξ_{k,t}}_{k=1..K}) = ∏_{k=1..K} p_t(λ_k),   (8)

and where L, compared with (1), is extended to handle the ensemble predictions:

L(x, y, Θ, Λ_K) = ℓ({f_Θ(x, λ_k)}_{k=1..K}, y) + Ω(Θ, {λ_k}_{k=1..K}).

For example, the loss ℓ can be the ensemble cross entropy or the average ensemble-member cross entropy (in our experiments we use the latter, as recent results suggest it often generalizes better [17]). The introduction of one distribution p_t per ensemble member also affects the validation step of the alternating optimization; in particular, we adapt (6) to become

min_{{ξ_{k,t}}_{k=1..K}} E_{Λ_K ∼ q_t, (x,y) ∈ D_val} [ ℓ_val({f_Θ(x, λ_k)}_{k=1..K}, y) − τ H[q_t(Λ_K)] ].   (9)

Note that the extensions (8)-(9) with K = 1 fall back to the standard formulation of [52]. In our experiments, we take Ω to be L2 regularizers applied to the parameters W_k(λ_k) and b_k(λ_k) of each ensemble member. In Appendix B.2, we show how to efficiently vectorize the computation of Ω across the ensemble members and the mini-batches of {λ_k}_{k=1..K} sampled from q_t, as required by (8). In practice, we use one sample of Λ_K for each data point in the batch: for MLP/LeNet (Section 5.1), we use 256 samples, while for ResNet-20 / Wide ResNet 28-10 (Section 5.2), we use 512 (64 for each of 8 workers).

Definition of p_t. In the experiments of Section 5, we manipulate hyperparameters λ that are positive and bounded (e.g., a dropout rate). For each ensemble member with hyperparameters λ_k ∈ R^m, we thus define its distribution p_t(λ_k) = p(λ_k | ξ_{k,t}) to be m independent log-uniform distributions (one per dimension of λ_k), which is a standard choice for hyperparameter tuning, e.g., [5, 6, 53]. With this choice, ξ_{k,t} contains 2m parameters, namely the bounds of the ranges of the m distributions. Similar to [52], at prediction time we take λ_k to be equal to the mean λ_k^mean of the distribution p_t(λ_k). In Appendix B.3, we provide additional details about p_t. The validation steps (6) and (9) seek to optimize the bounds of the ranges. More specifically, the loss ℓ_val favors compact ranges around a good hyperparameter value, whereas the entropy term encourages wide ranges, as traded off by τ. We provide an example of the optimization trajectory of λ and its range in Figure 2-(right), where λ corresponds to the mean of the log-uniform distribution.
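A minimal sketch of the log-uniform distribution p_t and its reparametrized sampling may clarify how gradients reach the bounds ξ_{k,t}; the entropy expression below follows directly from the log-uniform density and is what the τ-weighted term in (9) acts on (NumPy is used here, so the gradient flow is only indicated in comments).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(log_a, log_b, n):
    """Reparametrized draw: lambda = exp(log_a + u * (log_b - log_a)), u ~ U(0,1).
    Since u is independent of the bounds, gradients w.r.t. (log_a, log_b)
    flow through this expression (the reparametrization trick)."""
    u = rng.uniform(size=n)
    return np.exp(log_a + u * (log_b - log_a))

def entropy_log_uniform(log_a, log_b):
    """H[p] for a log-uniform on [a, b]: E[log lambda] + log(log b - log a).
    Wider ranges give higher entropy, which the tau * H term in (9) rewards."""
    return (log_a + log_b) / 2.0 + np.log(log_b - log_a)

lam = sample_log_uniform(np.log(1e-4), np.log(1e-1), n=5)   # dropout-like rates
print(lam, entropy_log_uniform(np.log(1e-4), np.log(1e-1)))
```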
5 Experiments

Throughout the experiments, we use both metrics that depend on the predictive uncertainty, namely the negative log-likelihood (NLL) and the expected calibration error (ECE) [55], and metrics that do not, e.g., the classification accuracy. The supplementary material also reports the Brier score [10] (for which we typically observed a strong correlation with the NLL). Moreover, as a diversity metric, we take the predictive disagreement of the ensemble members normalized by (1 − accuracy), as used in [22]. In the tables, we write the number of ensemble members in brackets “(·)” next to the name of each method.

5.1 Multi-layer perceptron and LeNet on Fashion MNIST & CIFAR-100

To validate our approaches and run numerous ablation studies, we first focus on small-scale models, namely an MLP and LeNet [44], on CIFAR-100 [40] and Fashion MNIST [73]. For both models, we add a dropout layer [66] before their last layer. For each pair of dataset/model type, we consider two tuning settings involving the dropout rate and different L2 regularizers defined at varied granularity, e.g., layerwise. Appendix C.1 gives all the details about the training, tuning, and dataset definitions.

Baselines. We compare our methods, (i) hyper-deep ens: the hyper-deep ensemble of Section 3 and (ii) hyper-batch ens: the hyper-batch ensemble of Section 4, to (a) rand search: the best single model after 50 trials of random search [6], (b) Bayes opt: the best single model after 50 trials of Bayesian optimization [63, 27], (c) deep ens: a deep ensemble [43] using the best hyperparameters found by random search, (d) batch ens: a batch ensemble [69], (e) STN: self-tuning networks [52], and (f) fixed init hyper ens: as defined in Section 3. The supplementary material details how we tune the hyperparameters specific to batch ens, STN and hyper-batch ens (see Appendix C.2, Appendix C.3 and Appendix C.4, with further ablations about e in Appendix C.5 and about τ in Appendix C.6). Note that batch ens needs the tuning of both its own hyperparameters and those of the MLP/LeNet models, while STN and hyper-batch ens automatically tune the latter.

We highlight below the key conclusions from Table 1 for single models and ensembles of size 3. The same conclusions can also be drawn for ensembles of size 5 (see Appendix C.7.1).

Ensembles benefit from both weight and hyperparameter diversity. With the pictorial view of Figure 2 in mind, fixed init hyper ens, i.e., a “row”, tends to outperform deep ens, i.e., a “column”. Moreover, those two approaches (as well as the other methods of the benchmark) are outperformed by our stratified procedure hyper-deep ens, demonstrating the benefit of combining hyperparameter and initialization diversity (see Appendix C.7.2 for a detailed assessment of the statistical significance). In Appendix C.7.3, we study diversity more specifically and show that hyper-deep ens indeed makes more diverse predictions than deep ens.

Efficient ensembles benefit from both weight and hyperparameter diversity. Among the efficient approaches (the three rightmost columns of Table 1), hyper-batch ens performs best. It improves upon both STN and batch ens, the two methods it builds upon. In line with [52], STN typically matches or improves upon rand search and Bayes opt. As explained in Section 4.1, hyper-batch ens has, however, twice the number of parameters of batch ens. In Appendix C.7.4, we therefore compare with a “deep ensemble of two batch ensembles” (i.e., the same number of parameters but twice as many members as hyper-batch ens). In that case too, hyper-batch ens either improves upon or matches the performance of the combination of two batch ens.

5.2 ResNet-20 and Wide ResNet 28-10 on CIFAR-10 & CIFAR-100

We evaluate our approach in a large-scale setting with ResNet-20 [31] and Wide ResNet 28-10 [74] models, as they are simple architectures with competitive performance on image classification tasks. We consider six different L2 regularization hyperparameters (one for each block of the ResNet) and a label smoothing hyperparameter. We show results on CIFAR-10, CIFAR-100 and corruptions of CIFAR-10 [33, 64]. Moreover, in Appendix D.3, we provide additional out-of-distribution evaluations along the lines of [32]. Further details about the experimental settings can be found in Appendix D.
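Since the ECE [55] is reported throughout the experiments, a short sketch of how it is typically computed may be useful; the equal-width binning estimator below is standard, and the number of bins is an arbitrary choice here, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence and average the gap between
    per-bin confidence and per-bin accuracy, weighted by bin size.
    probs: (N, C) predicted class probabilities; labels: (N,) integer labels."""
    conf = probs.max(axis=1)                    # predicted confidence
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece
```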
CIFAR-10/100. We compare hyper-deep ens with a single model (tuned as explained next) and with deep ens of varying ensemble sizes. Our hyper-deep ens is constructed from 100 trials of random search, while deep ens and single take the best hyperparameter configuration found by the random search procedure. Figure 1 displays the results on CIFAR-100 along with the standard errors and shows that, throughout the ensemble sizes, there is a substantial performance improvement of hyper-deep ensembles over deep ensembles. The results for CIFAR-10 are shown in Appendix D, where hyper-deep ens leads to consistent but smaller improvements, e.g., in terms of NLL.

We next fix the ensemble size to four and compare the performance of hyper-batch ens with the directly competing method batch ens, as well as with hyper-deep ens, deep ens and single. The results are reported in Table 2. On CIFAR-100, hyper-batch ens improves upon, or matches, batch ens across all metrics. For instance, in terms of NLL, it improves upon batch ens by about 7% and 2% for ResNet-20 and Wide ResNet 28-10 respectively. Moreover, the members of hyper-batch ens make more diverse predictions than those of batch ens. On CIFAR-10, hyper-batch ens also achieves a consistent, though less pronounced, improvement (see Table 2). On the same Wide ResNet 28-10 benchmark, with identical training and evaluation pipelines (see https://github.com/google/uncertainty-baselines), variational inference [70] leads to (NLL, ACC, ECE) = (0.211, 0.947, 0.029) and (0.944, 0.778, 0.097) for CIFAR-10 and CIFAR-100 respectively, while Monte Carlo dropout [23] gets (NLL, ACC, ECE) = (0.160, 0.959, 0.024) and (0.830, 0.776, 0.050) for CIFAR-10 and CIFAR-100 respectively.

We can finally look at how the joint training in hyper-batch ens leads to complementary ensemble members. For instance, for Wide ResNet 28-10 on CIFAR-100, while the ensemble performance is (NLL, ACC) = (0.678, 0.820) (see Table 2), the individual members obtain substantially poorer performance, as measured by the average ensemble-member metrics (NLL, ACC) = (0.904, 0.788).

Training time and memory cost. Both in terms of the number of parameters and of training time, hyper-batch ens is about twice as costly as batch ens. For CIFAR-100, hyper-batch ens takes 2.16 minutes/epoch and batch ens 1.10 minutes/epoch. More details are available in Appendix D.6.

Calibration on out-of-distribution data. We measure calibration on corrupted datasets, a type of out-of-distribution data. We consider the recently published dataset of [33], which applies over 30 types of corruptions to the images of CIFAR-10. A similar benchmark can be found in [64]. In Figure 3, we find that all ensemble methods improve upon the single model. The mean accuracies are similar for all ensemble methods, whereas hyper-batch ens shows more robustness than batch ens, as it typically leads to smaller worst-case values (see the bottom whiskers in Figure 3). Plots for the calibration error and NLL can be found in Appendix D.5.

6 Discussion

We envision several promising directions for future research.

Towards more compact parametrization. In this work, we have used the layers from [52], which lead to a 2x increase in memory compared with standard layers. In lieu of (3), low-rank parametrizations, e.g., W + Σ_{j=1..h} e_j(λ) g_j h_j^T, would be appealing to reduce the memory footprint of self-tuning networks and hyper-batch ensembles. We formally show in Appendix E that this family of parametrizations is well motivated in the case of shallow models, where they enjoy good approximation guarantees.
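A small sketch of this low-rank alternative, with a linear placeholder for the embedding e(λ) (which would be learned in practice), may make the parametrization concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
r, s, h, m = 8, 4, 3, 2        # in/out dims, rank h, number of hyperparameters

W = rng.normal(size=(r, s))
G = rng.normal(size=(h, r))    # the g_j vectors, one per rank-1 term
H = rng.normal(size=(h, s))    # the h_j vectors
P = rng.normal(size=(m, h))    # placeholder linear embedding e(lambda) = lambda @ P

def low_rank_weight(lam):
    """W + sum_j e_j(lambda) g_j h_j^T: the weight responds to the
    hyperparameters through only h rank-1 directions."""
    e = lam @ P                               # coefficients e_j(lambda), shape (h,)
    return W + np.einsum('j,jr,js->rs', e, G, H)

print(low_rank_weight(np.array([0.3, 1.2])).shape)   # (8, 4)
```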
Architecture diversity. Our proposed hyperparameter ensembles provide diversity with respect to hyperparameters related to regularization and optimization. We would like to go further and ensemble very different functions in the search space, varying, for example, the network width, the depth [2], and the choice of residual block. Doing so connects to older work on Bayesian marginalization over structures [37, 1]. More broadly, we can ask: what other types of diversity matter to endow deep learning models with better uncertainty estimates?

Broader Impact

Our work belongs to a broader research effort that tries to quantify the predictive uncertainty of deep neural networks. Those models are known to generalize poorly under small changes to the data while maintaining high confidence in their predictions.

Who may benefit from this research? The broader topic of our work is becoming increasingly important in a context where machine learning systems are being deployed in safety-critical fields, e.g., medical diagnosis [54, 49] and self-driving cars [48]. Those applications would benefit from the general technology we contribute to. In such cases, it is essential to be able to reliably trust the uncertainty output by the models before any decision-making process, and to possibly escalate uncertain decisions to appropriate human operators.

Who may be put at a disadvantage by this research? We are not aware of a group of people that may be put at a disadvantage as a result of this research.

What are the consequences of failure of the system? By definition, our research could contribute to aspects of machine-learning systems used in high-risk domains (e.g., the medical and self-driving applications mentioned earlier) that involve complex data-driven decision-making processes. Depending on the nature of the application at hand, a failure of the system could lead to extremely negative consequences. A case in point is the screening system recently used by one third of UK government councils to allocate welfare budgets.

Do the task/method leverage biases in the data? The method we develop in this work is domain-agnostic and does not rely on specific data assumptions. Our method also does not contain components that would prevent its combination with existing fairness or privacy-preserving technologies [4].

Acknowledgments

We would like to thank Nicolas Le Roux, Alexey Dosovitskiy and Josip Djolonga for insightful discussions at earlier stages of this project. Moreover, we would like to thank Sebastian Nowozin, Klaus-Robert Müller and Balaji Lakshminarayanan for helpful comments on a draft of this paper.
1. What is the focus and contribution of the paper on ensemble learning? 2. What are the strengths of the proposed approach, particularly in terms of its motivation and writing style? 3. What are the weaknesses of the paper, especially regarding its novelty and simplicity? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes to ensemble over both weights and hyperparameters to improve performance. Specifically, the paper proposes stratified hyper ensembles, which involve a random search over different hyperparameters stratified across multiple random initializations. The authors also propose batch hyper ensembles, a parameter-efficient version of the model. The proposed model is tested on image classification tasks and achieves favorable performance. Strengths 1. This paper is well-motivated and well-written. It is easy to read and follow, with sufficient details on the model. 2. The model consistently outperforms the baselines. Weaknesses On the novelty. The proposed method is simple and straightforward. Although the empirical performance is good, the novelty is incremental.
NIPS
Title Learning to See by Looking at Noise Abstract Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper, we go a step further and ask if we can do away with real image datasets entirely, by learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations. 1 Introduction The importance of data in modern computer vision is hard to overstate. Time and again we have seen that better models are empowered by bigger data. The ImageNet dataset [1], with its 1.4 million labeled images, is widely thought to have spurred the era of deep learning, and since then the scale of vision datasets has been increasing at a rapid pace; current models are trained on up to one billion images [2]. In this paper, we question the necessity of such massive training sets of real images. Instead, we investigate a suite of procedural noise models that generate images from simple random processes. These are then used as training data for a visual representation learner. We identify two key properties that make for good synthetic data for training vision systems: 1) naturalism, 2) diversity. Interestingly, the most naturalistic data is not always the best, since naturalism can come at the cost of diversity. The fact that naturalistic data help may not be surprising, and it suggests that indeed, large-scale real data has value. However, we find that what is crucial is not that the data be real but that it be naturalistic, i.e. it must capture certain structural properties of real data. Many of these properties can be captured in simple noise models (Fig. 1). The implications of our work are severalfold. First, our results call into question the true complexity of the problem of vision: if very short programs can generate data that trains a high-performing vision system, then vision may be simpler than we thought, and might not require huge data-driven systems to achieve adequate performance. Second, our methods open the door to training vision systems without reliance on datasets. The value in this is that datasets are encumbered by numerous costs: they may be expensive, biased, private, or simply intractable to collect. We do not argue for removing datasets from computer vision entirely (as real data might be required for evaluation), but rather for reconsidering what can be done in their absence. 2 Related work 2.1 A short history of image generative models Out of the R^(3n²)-dimensional space spanned by 3 × n² color images, natural images occupy only a small part of that space; the rest is mostly filled by noise.
During the last decades, researchers have studied the space of natural images to build models capable of compressing, denoising and generating images. The result of this research is a sequence of generative image models of increasing complexity, narrowing down the space occupied by natural images within R^(3n²). One surprising finding is that natural images follow a power law on the magnitude of their Fourier transform [4, 5]. This is the basis of Wiener image denoising [6] and of scale-invariant models of natural images [7, 8]. The dead leaves model [8, 9] was an attempt at generating images that explained the power law found in natural images, and it inspired the Retinex algorithm [10]. The multiscale and self-similar nature of natural images inspired the use of fractals [11, 12, 13] as image models. Coding research for TV [5] and image modeling [6, 14, 15, 16] showed another remarkable property of natural images: the responses of zero-mean wavelets applied to natural images are sparse and follow a generalized Laplacian distribution [6]. Color and intensity distributions in natural images have also been studied and found to follow rules that deviate from random noise [10, 17]. Research in texture synthesis showed how these statistical image models produced more realistic-looking textures [18, 19]. Those methods required fitting the image-model parameters to specific images in order to sample more images "like them". Recently, GANs [20] have shown remarkable image synthesis results [21]. Although GANs need real images to learn the network parameters, we show in this paper that they introduce a structural prior useful for encoding image properties without requiring any training. 2.2 Training without real data and training with synthetic data Through the above progression, generative models have become increasingly complex, with more parameters and more training data needed to fit these parameters. The same has happened with vision systems in general: state-of-the-art systems like BiT [22], CLIP [23], and SEER [2] obtain their best results on 300 million, 400 million, and 1 billion images respectively. These papers further show that such large data is critical to getting the best performance. While this may be true, other work has shown that much smaller data is sufficient to already get decent performance. A single image can be enough to train, from scratch, a compelling generative model [24, 25] or visual representation [26], and, even with no training data at all, deep architectures already encode useful image priors that can be exploited for low-level vision tasks [27] or for measuring perceptual similarity [28]. Our results, using an untrained StyleGANv2 [29] to generate training data, further affirm the utility of the structural priors in neural net architectures. An alternative to training with real data is to train on synthetic data. This approach has been widely used in low-level tasks like depth, stereo, or optical flow estimation [30, 31, 32, 33], where 3D rendering engines can provide densely annotated data to learn from. Interestingly, for this class of tasks diversity is more important than realism [34], making procedurally generated scenes an effective alternative to renderings designed by professional 3D artists [35, 36]. Recent work has also investigated using deep generative models as a source of synthetic data to train classifiers [37, 38] and visual representations [39], or to generate synthetic annotated data for other downstream tasks [40, 41, 42].
However, these generative models are still fit to real image datasets and produce realistic-looking images as samples. In this paper we push even further away from realism, generating synthetic data from simple noise processes. The closest prior work in this direction is the pioneering work of [43], which used automatically generated fractals to pre-train neural networks that converge faster than their randomly initialized counterparts. While they demonstrated that fractals can be effective for pre-training, there is still a large gap compared to pre-training on real data. We explore a much broader range of noise processes, including many classic models from the image coding and texture synthesis literature. The use of randomized training data has also been explored under the heading of domain randomization [44], where 3D synthetic data is rendered under a variety of lighting conditions to transfer to real environments where the lighting may be unknown. Our approach can be viewed as an extreme form of domain randomization that does away with the simulation engine entirely: make the training data so diverse that a natural image will just look like a sample from the noise process. There is some evidence that biology takes a similar approach during the prenatal development of the vision system. "Retinal waves", spontaneous, semi-random activations of the retina, are thought to entrain edge detectors and other simple structures in the developing mammalian brain [45]. 3 A progression of image generative models Here we provide the details of the image models we will use in this paper. We test a suite of generative models of the form g_θ : z → x, where z are stochastic latent variables and x is an image. We treat image generation as a hierarchical process in which first the parameters of a model, θ, are sampled, and then the image is sampled given these parameters and stochastic noise. The parameters θ define properties of the distribution from which we will sample images, for example, the mean color of the image. The sampling process is as follows: θ ∼ p(θ), z ∼ p(z), and x = g_θ(z), which corresponds to sampling images from the distribution p(x, θ) = p(x|θ)p(θ) (a minimal sketch of this sampling interface is given at the end of Section 3.1). The parameters θ that define the model may or may not be fit to real data. We will explore the case where the parameters are not fit to real data but instead sampled from simple prior distributions. Next, we describe the generative image models that we evaluate in this paper (Fig. 2). 3.1 Procedural image models The first class of models belongs to the family of procedural image models. Procedural models are capable of generating very realistic-looking images in specific image domains. We also include fractals in this set, although they could form a class of their own. Fractals: Fractals have been shown to capture geometric properties of elements found in nature [46]. Consequently, image models consisting of renderings of human-designed shapes with fractal structure [43] are likely to reproduce patterns found in natural images. CG: Simulators and game engines rely on a mixture of human-driven design and procedural methods to generate environments simulating real-world properties such as illumination, 3D, and semantics. Here we include three CG models popular in computer vision with available datasets: CLEVR [47], DMLab [48] and MineRL [49].
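The hierarchical sampling process described at the start of this section can be summarized in a few lines of Python; the toy g below (a flat color plus pixel noise) is purely illustrative and is not one of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_images(sample_theta, sample_z, g, n_images=4):
    """Hierarchical sampling from Section 3: theta ~ p(theta) picks a model
    instance, then each image is x = g_theta(z) with z ~ p(z)."""
    theta = sample_theta(rng)                       # model parameters
    return [g(theta, sample_z(rng)) for _ in range(n_images)]

images = sample_images(
    sample_theta=lambda rng: rng.uniform(size=3),                   # p(theta): a mean color
    sample_z=lambda rng: rng.normal(scale=0.1, size=(64, 64, 3)),   # p(z): pixel noise
    g=lambda theta, z: np.clip(theta + z, 0.0, 1.0),                # g_theta(z)
)
print(images[0].shape)   # (64, 64, 3)
```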
3.2 Dead leaves image models The second family of models is the dead leaves model, one of the simplest image models. We consider simple shapes (leaves) such as circles, triangles and squares, which are positioned uniformly at random on the image canvas until it is covered. To produce each shape, we circumscribe it in a circle whose radius follows an exponential distribution with parameter λ. This procedure has been shown to produce images that have statistics similar to natural images, such as a 1/|f|^α power spectrum [8] and non-Gaussian distributions of derivatives and wavelet coefficients [50]. In this study we consider four dead leaves models: Dead leaves - Squares: uses only axis-aligned squares. Dead leaves - Oriented: squares are randomly rotated. Dead leaves - Shapes: leaves can be circles, triangles and rectangles. Dead leaves - Textured: uses square leaves filled with a texture sampled from the statistical image models described in the next section. 3.3 Statistical image models The third family consists of statistical image models of increasing complexity. Several generative models can be composed by using different combinations of properties. Spectrum: The magnitude of the Fourier transform of many natural images follows a power law, 1/|f|^α, where α is a constant close to 1 [4]. In this generative model, we sample random noise images constrained to have a Fourier-transform magnitude following 1/(|f_x|^a + |f_y|^b), with a and b two random numbers uniformly sampled in the range [0.5, 3.5] (see the sketch at the end of this subsection). This model introduces a bias towards horizontal and vertical image orientations typical of natural images [51]. To generate color images we first sample three random orthogonal color directions and then generate power-law noise on each channel independently. Samples from this model are shown in Fig. 2(i). Wavelet-marginal model (WMM): Following [52], we generate textures by modeling their histograms of wavelet coefficients. To produce a texture, we create marginal histograms for the coefficients c_i at N scales (i ∈ {0, ..., N−1}) and 4 orientations following a generalized normal distribution centered at zero, thus p(c_i) ∝ exp(−(|c_i|/α_i)^{β_i}). Each scale i represents the image down-scaled by a factor of 2^i, and the parameters α_i and β_i for each scale are α_i = 4·2^i and β_i ∼ 0.4 + U(0, 0.4). In practice we use N = 3 and N = 4 for generating 128 × 128 and 256 × 256 resolution images respectively. Once we have sampled a marginal distribution of wavelet coefficients for each of the three channels, we do histogram matching iteratively, starting from a Gaussian noise image, following [53]. Fig. 2(j) shows samples from this model. Color histograms: Here we take a generative model that follows the color distribution of the dead leaves model. First we sample a number of regions N ∼ 3 + ⌊U(0, 20)⌋, their relative sizes S ∼ 0.001 + U(0, 1), and their colors uniformly at random. This results in a color distribution different from uniform. Combining all these different models allows capturing color distributions, spectral components, and wavelet distributions that mimic those typical of natural images. Fig. 2(k) shows the result of sampling from a model that enforces random white noise to have the power-law spectrum and the color histogram of this model. Fig. 2(l) shows samples from a model incorporating all of those properties (spectrum, color and WMM). Those models produce intriguing images but fail to capture the full richness of natural images, as shown in Fig. 2(i-l).
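As a concrete example of the statistical models above, here is a minimal single-channel sketch of the Spectrum model; the full model additionally samples three random orthogonal color directions and generates one such channel along each.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum_sample(n=128):
    """White noise reshaped to have a Fourier magnitude envelope following
    1/(|fx|^a + |fy|^b), with a, b ~ U(0.5, 3.5)."""
    a, b = rng.uniform(0.5, 3.5, size=2)
    fx = np.fft.fftfreq(n)[:, None]                          # horizontal frequencies
    fy = np.fft.fftfreq(n)[None, :]                          # vertical frequencies
    mag = 1.0 / (np.abs(fx) ** a + np.abs(fy) ** b + 1e-8)   # avoid div-by-0 at DC
    white = rng.normal(size=(n, n))
    img = np.real(np.fft.ifft2(np.fft.fft2(white) * mag))
    return (img - img.min()) / (img.max() - img.min())       # rescale to [0, 1]

print(spectrum_sample().shape)   # (128, 128)
```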
3.4 Generative adversarial networks The fourth family of models is based on the architecture of GANs. Commonly, the parameters of a GAN are trained to generate realistic samples of a given training distribution. Here, we use neither adversarial training nor any training data. Instead, we explore different types of initialization, study the usefulness of the GAN architecture as a prior for image generation, and show that effective data generators can be formed by sampling the model parameters from simple prior distributions. We use an untrained StyleGANv2 [29] and modify its initialization procedure to obtain images with different characteristics. This results in four classes of StyleGAN initializations: StyleGAN-Random is the default initialization. Fig. 2(m) shows samples from this model. They lack high-frequency image content, since the noise maps are not applied at initialization. StyleGAN-High-freq. In this model, we increase high-frequency image content by sampling the noise maps as 1/f^α noise with α ∼ U(0.5, 2), which models the statistics of natural images [4]. Additionally, the convolutional filters in all layers are randomly sampled from a bank of 3×3 wavelet filters, and each sampled wavelet is multiplied by a random amplitude ∼ N(0, 1). Note that using wavelets as spatial filters is a common practice when hand-designing networks [54, 55] and seems to capture well the underlying general structure of visual data. The samples in Fig. 2(n) show that this model generates high-frequency structures that are fairly uniformly distributed across the image. StyleGAN-Sparse. Natural images exhibit a high degree of sparsity. In this model, we increase the sparsity of the images through two modifications. First, we modulate the 1/f^α noise maps with a Laplacian envelope: we sample a 4×4 grid of i.i.d. Laplacian noise, resize it to the desired noise-map resolution using bicubic upsampling, and multiply this envelope with the originally sampled 1/f^α noise. Second, at each convolution we add a random bias ∼ U(−0.2, 0.2), which, in conjunction with the nonlinearities, further increases sparsity. Fig. 2(o) shows that the images created by this model indeed appear sparser; yet, they still lack discernible image structures. StyleGAN-Oriented. Oriented structures are a crucial component of natural images. We found that an effective way to introduce such structures into the previous models is to tie the wavelets, i.e., to use the same wavelet for all output channels. Under tied wavelets, the standard convolution becomes y_k = Σ_l [a_{k,l} (x_l ⋆ f_l)] + b_k, where y_k denotes output channel k, x_l denotes input channel l, b_k is a bias term, a_{k,l} ∼ N(0, 1) is a random amplitude multiplier, and the wavelet f_l depends only on the input channel but is shared across all output channels. As can be seen in Fig. 2(p), this creates visible, oriented structures in the output images.
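The tied-wavelet convolution of StyleGAN-Oriented can be sketched as follows; for brevity, the "wavelets" here are random 3×3 filters, whereas the model samples them from an actual wavelet bank.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
C_in, C_out, H, W = 3, 5, 16, 16

x = rng.normal(size=(C_in, H, W))
wavelets = rng.normal(size=(C_in, 3, 3))   # one 3x3 filter per INPUT channel
A = rng.normal(size=(C_out, C_in))         # random amplitudes a_{k,l} ~ N(0, 1)
b = rng.normal(size=C_out)                 # biases b_k

def tied_wavelet_conv(x):
    """Tied-wavelet layer: the filter f_l is shared across all output channels,
    so each input channel is filtered once and the outputs only remix the responses."""
    resp = np.stack([correlate2d(x[l], wavelets[l], mode='same')
                     for l in range(C_in)])        # (C_in, H, W), computed once
    return np.einsum('kl,lhw->khw', A, resp) + b[:, None, None]

print(tied_wavelet_conv(x).shape)   # (5, 16, 16)
```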
3.5 Feature visualizations The final class of models we study is feature visualizations [56]. CNNs can be used to produce novel images by optimizing the output of single or multiple units with respect to the input image, which is initialized with noise. Although these methods are commonly used to interpret and visualize the internal representation of a network, here we use them as an image-generation process. Following the procedure in [57], we obtain feature visualizations by optimizing the value of a single unit, or of a combination of two units, of a neural network. We select those units from the layer that is typically used as a feature extractor (i.e., the output of the penultimate linear layer), as we have empirically found this to yield more image-like visualizations compared to shallower layers. We create two datasets using this technique: 1) Feature vis. - Random: a ResNet50 with the default random initialization, shown in Fig. 2(q), and 2) Feature vis. - Dead leaves: a ResNet50 trained on dead leaves with diverse shapes, using MoCo v2 [3] and 1.3M sampled images. Samples are shown in Fig. 2(r). 4 Experiments To study the proposed image-generation processes, we train an AlexNet-based encoder using the alignment and uniformity loss proposed in [58], a form of contrastive loss theoretically equivalent to the popular InfoNCE loss [59] (a short sketch of this loss is given at the end of this section). We generate 105k samples with the proposed image models at 128×128 resolution, which are then downsampled to 96×96 and cropped at random to 64×64 before being fed to the encoder. After unsupervised training, we evaluate linear-classification performance (without finetuning) on the representation right before the projection layer, following standard practice [58, 59]. We fix a common set of hyperparameters for all the methods under test to the values found to perform well by the authors of [58]. Further details of the training are provided in the Sup. Mat. We evaluate performance on Imagenet-100 [60] and the Visual Task Adaptation Benchmark [61]. VTAB consists of 19 classification tasks grouped into three categories: a) Natural, consisting of images of the world taken with consumer cameras, b) Specialized, consisting of images of specialized domains such as medical or aerial photography, and c) Structured, where the classification tasks require understanding specific properties like shapes or distances. For each of the datasets in VTAB, we fix the number of training and validation samples to 20k, chosen at random for the datasets where more samples are available. As upper bounds on the performance expected with synthetic images, we consider the same training procedure with the following real datasets: 1) Places365 [62], covering a wide set of classes but a different domain, 2) STL-10 [63], consisting of only 10 classes of natural images, and 3) Imagenet1k [1], a superset of Imagenet100. As baselines we use mean image colors, raw pixels, and features obtained by an untrained AlexNet (denoted CNN - Random). 4.1 Image model comparison Figures 3 and 4 show the performance of the proposed fully generative methods from noise on Imagenet100 and VTAB (tables can be found in the Sup. Mat.). The results on both datasets show increased performance on natural data (Imagenet100 and the Natural tasks in VTAB) that matches the qualitative complexity and diversity of the samples as seen in Fig. 2. On the other hand, Structured and Specialized tasks do not benefit as much from natural data (as seen in the middle and right-most plots in Fig. 4), and our models perform similarly across the tasks under test in this setting. 4.2 Large-scale experiments Finally, we test one of the best-performing methods of each type in a large-scale experiment using a ResNet50 encoder instead of AlexNet. We generate 1.3M samples of the datasets at 256×256 resolution and train using the procedure described in MoCo v2 [3] with the default hyperparameters for Imagenet1k (details of the training can be found in the Sup. Mat.).
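For reference, here is a minimal NumPy sketch of the alignment and uniformity objective of [58] on L2-normalized embeddings; the constants alpha, t and the weighting lam below follow the defaults of [58] only loosely and should be treated as placeholders.

```python
import numpy as np

def align_uniform_loss(z1, z2, alpha=2, t=2, lam=1.0):
    """Alignment & uniformity objective on the unit hypersphere.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Alignment: positive pairs (two views of the same image) should be close.
    align = np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha)
    # Uniformity: average pairwise Gaussian potential over distinct images.
    sq_dists = np.sum((z1[:, None, :] - z1[None, :, :]) ** 2, axis=-1)
    off_diag = sq_dists[~np.eye(len(z1), dtype=bool)]
    uniform = np.log(np.mean(np.exp(-t * off_diag)))
    return align + lam * uniform
```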
The results in Table 1 show that the relative performance from the experiments with the AlexNet-based encoder is approximately preserved (except for dead leaves, which underperforms FractalDB-1k). Despite using the MoCo v2 hyperparameters found to be good for Imagenet1k (which may not be optimal for our datasets) and not using any real data, our best-performing model achieves 38.12% top-1 accuracy with linear classification on top of the learned features. Furthermore, we achieve 40.06% top-1 accuracy by combining four of our datasets: Dead leaves - Shapes, Feature vis. - Dead leaves, Statistical (Spectrum + Color + WMM) and StyleGAN - Oriented. Additionally, our image models allow training on arbitrary amounts of samples that can be generated on the fly. For StyleGAN - Oriented, we found that training with the procedure described above but continuously sampling new images, instead of using the fixed 1.3M images, yields an improved top-1 accuracy of 38.94%.

5 What properties make for good generated data?

As we have seen, representations learned from noise-like images can be surprisingly powerful. Why is this the case? Intuitively, one might think that the features learned from these datasets themselves resemble noise. Interestingly, this is not the case. Fig. 5 shows the diversity of feature activations learned with the different datasets, extracted using the same procedure as described in Section 3.5. Though some datasets fail to capture certain properties (e.g., features trained on the Dead leaves - Squares dataset only react to axis-aligned images), the feature visualizations qualitatively show that our datasets contain sufficient structure to learn a variety of complex features. Yet, not all of our datasets reach the same performance, which raises the question of what makes a generated dataset good. Which properties are strongly correlated with performance? In the following, we investigate the statistical properties of individual images (Section 5.1) and of datasets as a whole (Section 5.2).

5.1 Properties of individual images

Across our datasets, the image appearance varies significantly (see Fig. 2): they contain different structures and shapes, different color profiles, different levels of “realism”, and different effects such as occlusions. How much of the difference in performance can be attributed to such low-level, per-image appearance characteristics?

Color profile. The first and most basic question is how important the color characteristics of our datasets are, since even in such a basic dimension as color, the datasets differ greatly (see Fig. 2). To test the impact of color characteristics on performance, we measure the color similarity between the test dataset (here, ImageNet-100) and each of our pretraining datasets. First, we extract L*a*b* values from 50K images from all datasets and from ImageNet-100. (In this and the next section, all measures were computed on 50K random images from the respective datasets and, where applicable, compared against the reference statistics of ImageNet-100. To avoid distortions, correlations are always computed without taking the datasets ImageNet-100 and ImageNet1k into account.) The color distribution of each dataset can then be modeled as a three-dimensional Gaussian, and the color difference between two datasets can be computed as the symmetric KL divergence between those distributions. Fig. 6(a) shows that the color similarity of most of our datasets to natural images is fairly low; at the same time, we see a clear negative correlation between performance and color distance (r = −0.57).
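The color-distance measure just described reduces to the symmetric KL divergence between two 3D Gaussians; the closed-form Gaussian KL used in the small sketch below is standard.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) for k-dimensional Gaussians (closed form)."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def color_distance(lab_a, lab_b):
    """Symmetric KL between 3D Gaussians fit to L*a*b* values of two datasets.
    lab_a, lab_b: (N, 3) arrays of sampled Lab color values."""
    mu_a, S_a = lab_a.mean(0), np.cov(lab_a.T)
    mu_b, S_b = lab_b.mean(0), np.cov(lab_b.T)
    return gaussian_kl(mu_a, S_a, mu_b, S_b) + gaussian_kl(mu_b, S_b, mu_a, S_a)
```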
Image spectrum. It is well known that the spectrum of natural images can be modeled as a heavy-tailed function A/|f|^α, with A a scaling factor and typically α ∈ [0.5, 2.0] [51]. How well do our datasets resemble these statistics, and how much does this impact performance? Fig. 6(b) shows that the statistics of most of our datasets fall within this range, except for StyleGAN-Random (the default initialization). However, while an exponent α ∈ [1.4, 1.7] seems to benefit performance, the correlation is weak.

Image coherence. Contrastive learning utilizes different views as positive pairs during training, which are commonly generated using random augmentations of the same input image. For this to work, the images need a degree of global coherence, i.e., two views of the same image should be more similar than two views from different images. To test the amount of coherence in our datasets and how this impacts downstream accuracy, we first compute the average perceptual variation within a dataset as (1/N) Σ_n LPIPS(f(I_n), g(I_n)), where f, g are two random crops of the same image and LPIPS is the perceptual distance from [28]. Fig. 6(c) shows the results. Interestingly, there seems to be a sweet spot of perceptual variation around 0.37; for datasets with lower variation, perceptual variation is strongly correlated with accuracy (r = 0.75). Going beyond this sweet spot decreases accuracy, since two augmentations of the same image are now too dissimilar to provide good information. Finding a similar effect, [64] showed that there is a sweet spot for augmentation strength in contrastive learning.

5.2 Properties of the datasets

Beyond the properties of individual images, it is important to ask which properties of a dataset as a whole explain its level of performance. The first obvious predictor of good performance on a given test task is the distance of the training data distribution to the test data. We can quantify this using the Fréchet Inception Distance [65], which measures the distance between two sets of images by fitting Gaussians to the distributions of Inception features and measuring the difference between those Gaussians. As shown in Fig. 7(a), FID is strongly negatively correlated (r = −0.85) with accuracy. An ideal pretraining dataset would therefore resemble the test dataset as closely as possible, achieving a low FID as well as high precision and recall with respect to the test dataset [66]. Interestingly, the trend we observe in our data points is in a different direction: precision is negatively correlated with accuracy (r = −0.67), and the data shows a strong negative correlation between precision and recall (r = −0.88). This can be explained by the fact that our methods are not capable of perfectly reproducing the entirety of ImageNet, leaving two possible scenarios: either a dataset is concentrated in a small part of the image manifold (images from CLEVR and DMLab look naturalistic, yet are severely restricted in their diversity compared to natural images), or the dataset overshoots the manifold of test images, containing as much diversity as possible, in the limit encompassing the test dataset as a subset. Should a dataset be as realistic as possible (i.e., maximize precision), or as diverse as possible? An extreme version of the first case would be a training dataset that is concentrated around a few points of the test dataset, which would achieve high precision but low recall, and generally have low diversity.
Given a distribution of features, we can measure the diversity of a dataset as |Σ|, i.e., the determinant of the covariance matrix in that feature space. Here, we use the same Inception features as for the FID (see above). Fig. 7(b) shows precision vs. log-volume for all datasets; color indicates the performance on ImageNet-100. As can be seen, datasets with high precision tend not to be particularly diverse: precision is negatively correlated with log-volume (r = −0.84). As volume increases, precision decreases, yet performance benefits. Considering recall, the picture changes. As shown in Fig. 7(c), higher recall is (after a certain point) positively correlated with both log-volume (r = 0.79) and accuracy (r = 0.83). Interestingly, this does not mean that precision is irrelevant: when controlling for recall, precision is again positively correlated with accuracy, albeit more weakly (r = 0.22). The reason for this behavior is that, in the case of our datasets, precision and recall are negatively correlated and thus present a trade-off. If such a trade-off is necessary when designing datasets for pretraining, our results indicate that maximizing recall might be more important than maximizing precision. Interestingly, despite a completely different setting, the same effect was observed in [34].

6 Conclusion

Can we learn visual representations without using real images? The datasets presented in this paper are composed of images with different types of structured noise. We have shown that, when designed using results from past research on natural image statistics, these datasets can successfully train visual representations. We hope that this paper will motivate the study of new generative models capable of producing structured noise that achieves even higher performance across a diverse set of visual tasks. Would it be possible to match the performance obtained with ImageNet pretraining? Perhaps, in the absence of a large training set specific to a particular task, the best pre-training might not use a standard real dataset such as ImageNet. Although it might be tempting to believe that the datasets presented in this paper reduce the impact of dataset bias, biases might still exist (although they might not come from social biases, they might still impact subsets of the test data differently), and it will be important to characterize them for each particular application.

Acknowledgments: Manel Baradad was supported by the LaCaixa Fellowship and Jonas Wulff was supported by a grant from Intel Corp. This research was partially conducted using computation resources from the Satori cluster donated by IBM to MIT.
1. What is the focus of the paper regarding image feature extractors? 2. What are the strengths of the proposed approach, particularly in evaluating various synthetic images? 3. What are the weaknesses of the paper, especially regarding the training objective and observations? 4. How can the authors improve their approach by generating better synthetic images? 5. Can the authors provide more conclusive evidence for their findings?
Summary Of The Paper Review
Summary Of The Paper This paper provides a comprehensive study of using synthetic images that do not look natural to train image feature extractors. Most of the synthetic images can be generated easily, either in closed form or using a randomly initialized model. The obtained feature extractors result in significantly better models on natural images than randomly initialized counterparts after fine-tuning. Various types of synthetic images are evaluated in this paper to reveal the properties of images that can be used to train better feature extractors. In general, I feel this paper is studying a very interesting problem, and a solution to the problem can be used in many practical scenarios where training data is difficult to collect or even unavailable, but the paper might need more technical contributions to be accepted. A potential way for improvement is to utilize the observations to generate better synthetic images from some random process that can improve the accuracy. Strengths: Evaluates a wide range of synthetic image types. Compares the statistical properties of synthetic and natural images, in an effort to uncover the important factors for good synthetic images. Suggestions and Questions: I feel the paper spends too many pages (3 pages) enumerating different image generation methods. These are important details, but maybe they should not take up 30%+ of the paper. The objective for training the models is not stated (I did not find it in the supplementary either), though references to the papers of these methods are given. I feel it is important to clearly state the training objective in the main paper, and some details of image generation can be moved to the appendix. I feel the observations in Sec. 5.1 are not conclusive. The accuracy does not seem to correlate well with the statistics considered. Maybe a more convincing way to verify the conclusions is to generate images that produce the statistics that will likely yield the highest accuracy according to the observations, and verify whether that is the case. The correlations seem much stronger in Sec. 5.2, but the results are not surprising. For example, it is quite intuitive that if the training images have a small FID to the test images, and are thus more similar to the test images, then the test accuracy will be higher. Summary after discussions: In general, I feel the findings in this paper are interesting and could potentially invoke investigations into the necessary conditions for the success of contrastive unsupervised learning for images, and would like to raise my score. However, I still encourage the authors to brand this paper as a scientific discovery rather than something that will have immediate impact on applications (addressing the privacy and bias concerns etc.). From the new results of mixing with real data, it seems adding images from random processes does not improve the results of only using real images. As to 3-Real unlabeled data, it also remains unclear whether pretraining with real images generalizes better than random images in the decentralized setting; it seems pretraining with real data will transfer better from the results of 4-Mixing with real data. Review Yes
NIPS
Title Learning to See by Looking at Noise Abstract Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper, we go a step further and ask if we can do away with real image datasets entirely, by learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations. 1 Introduction The importance of data in modern computer vision is hard to overstate. Time and again we have seen that better models are empowered by bigger data. The ImageNet dataset [1], with its 1.4 million labeled images, is widely thought to have spurred the era of deep learning, and since then the scale of vision datasets has been increasing at a rapid pace; current models are trained on up to one billion images [2]. In this paper, we question the necessity of such massive training sets of real images. Instead, we investigate a suite of procedural noise models that generate images from simple random processes. These are then used as training data for a visual representation learner. We identify two key properties that make for good synthetic data for training vision systems: 1) naturalism, 2) diversity. Interestingly, the most naturalistic data is not always the best, since naturalism can come at the cost of diversity. The fact that naturalistic data help may not be surprising, and it suggests that indeed, large-scale real data has value. However, we find that what is crucial is not that the data be real but that it be naturalistic, i.e. it must capture certain structural properties of real data. Many of these properties can be captured in simple noise models (Fig. 1). The implications of our work are severalfold. First, our results call into question the true complexity of the problem of vision – if very short programs can generate and train a high-performing vision system, then vision may be simpler than we thought, and might not require huge data-driven systems to achieve adequate performance. Second, our methods open the door to training vision systems without reliance on datasets. The value in this is that datasets are encumbered by numerous costs: ∗Equal contribution 35th Conference on Neural Information Processing Systems (NeurIPS 2021). they may be expensive, biased, private, or simply intractable to collect. We do not argue for removing datasets from computer vision entirely (as real data might be required for evaluation), but rather reconsidering what can be done in their absence. 2 Related work 2.1 A short history of image generative models Out of the R3n2 dimensional space spanned by 3× n2 color images, natural images occupy a small part of that space, the rest is mostly filled by noise. 
During the last decades, researchers have studied the space of natural images to build models capable of compressing, denoising and generating images. The result of this research is a sequence of generative image models of increasing complexity narrowing down the space occupied by natural images within R3n2 . One surprising finding is that natural images follow a power law on the magnitude of their Fourier transform [4, 5]. This is the basis of Wiener image denoising [6] and scale-invariant models of natural images [7, 8]. Dead leaves model [8, 9] was an attempt at generating images that explained the power law found in natural images and inspired the Retinex algorithm [10]. The multiscale and self-similar nature of natural images inspired the use of fractals [11, 12, 13] as image models. Coding research for TV [5] and image modeling [6, 14, 15, 16] showed another remarkable property of natural images: the output values of zero mean wavelets to natural images are sparse and follow a generalized Laplacian distribution [6]. Color and intensity distributions in natural images have also been studied and found to follow rules that deviate from random noise [10, 17]. Research in texture synthesis showed how these statistical image models produced more realistic-looking textures [18, 19]. Those required fitting the image model parameters to specific images to sample more "like it". Recently, GANs [20] have shown remarkable image synthesis results [21]. Although GANs need real images to learn the network parameters, we show in this paper that they introduce a structural prior useful to encode image properties without requiring any training. 2.2 Training without real data and training with synthetic data Through the above progression, generative models have become increasingly complex, with more parameters and more training data needed to fit these parameters. The same has happened with vision systems in general: state-of-the-art systems like BiT [22], CLIP [23], and SEER [2] obtain their best results on 300 million, 400 million, and 1 billion images respectively. These papers further show that such large data is critical to getting the best performance. While this may be true, other work has shown that much smaller data is sufficient to already get decent performance. A single image can be enough to train, from scratch, a compelling generative model [24, 25] or visual representation [26], and, even with no training data at all, deep architectures already encode useful image priors that can be exploited for low-level vision tasks [27] or for measuring perceptual similarity [28]. Our results, using an untrained StyleGANv2 [29] to generate training data, further affirm the utility of the structural priors in neural net architectures. An alternative to training with real data is to train on synthetic data. This approach has been widely used in low-level tasks like depth, stereo, or optical flow estimation [30, 31, 32, 33], where 3D rendering engines can provide densely annotated data to learn from. Interestingly, for this class of tasks diversity is more important than realism [34], making procedurally generated scenes an effective alternative to renderings designed by professional 3D artists [35, 36]. 2Answers: 1,3,4,5,6,8,14 are from ImageNet images. Recent work has also investigated using deep generative models as a source of synthetic data to train classifiers [37, 38] and visual representations [39], or to generate synthetic annotated data for other downstream tasks [40, 41, 42]. 
However, these generative models are still fit to real image datasets and produce realistic-looking images as samples. In this paper we push even further away from realism, generating synthetic data from simple noise processes. The closest prior work in this direction is the pioneering work of [43], which used automatically generated fractals to pre-train neural networks that converge faster than their randomly initialized counterparts. While they demonstrated that fractals can be effective for pre-training, there is still a large gap compared to pre-training on real data. We explore a much broader range of noise processes, including many classic models from the image coding and texture synthesis literature. The use of randomized training data has also been explored under the heading of domain randomization [44], where 3D synthetic data is rendered under a variety of lighting conditions to transfer to real environments where the lighting may be unknown. Our approach can be viewed as an extreme form of domain randomization that does away with the simulation engine entirely: make the training data so diverse that a natural image will just look like a sample from the noise process. There is some evidence that biology takes a similar approach during the prenatal development of the vision system. “Retinal waves" – spontaneous, semi-random activations of the retina – are thought to entrain edge detectors and other simple structures in the developing mammalian brain [45]. 3 A progression of image generative models Here we provide the details for the image models we will use in this paper. We test a suite of generative models of the form gθ : z→ x, where z are stochastic latent variables and x is an image. We will treat image generation as a hierarchical process in which first the parameters of a model, θ, are sampled, and then the image is sampled given these parameters and stochastic noise. The parameters θ define properties of the distribution from which we will sample images, for example, the mean color of the image. The sampling process is as follows: θ ∼ p(θ), z ∼ p(z), and x = gθ(z), which corresponds to sampling images from the distribution p(x, θ) = p(x|θ)p(θ). The parameters θ that define the model may be fit to real data or not. We will explore the case where the parameters are not fit to real data but instead sampled from simple prior distributions. Next, we describe the generative image models that we will evaluate in this paper (Fig. 2). 3.1 Procedural image models The first class of models belong to the family of procedural image models. Procedural models are capable of generating very realistic-looking images in specific image domains. We include in this set also fractals, although they could make a class on their own. Fractals: Fractals have been shown to capture geometric properties of elements found in nature [46]. Consequently, image models consisting of renderings of human-designed shapes with fractal structure [43] are likely to reproduce patterns found in natural images. CG: Simulators and game engines rely on a mixture of human-driven design and procedural methods to generate environments simulating real-world properties such as illumination, 3D, and semantics. Here we include three CG models popular in computer vision with available datasets: CLEVR [47], DMLab [48] and MineRL [49]. 3.2 Dead leaves image models The second family of models is Dead leaves, one of the simplest image models. 
3.1 Procedural image models

The first class of models belongs to the family of procedural image models. Procedural models are capable of generating very realistic-looking images in specific image domains. We also include fractals in this set, although they could form a class of their own.

Fractals: Fractals have been shown to capture geometric properties of elements found in nature [46]. Consequently, image models consisting of renderings of human-designed shapes with fractal structure [43] are likely to reproduce patterns found in natural images.

CG: Simulators and game engines rely on a mixture of human-driven design and procedural methods to generate environments simulating real-world properties such as illumination, 3D, and semantics. Here we include three CG models popular in computer vision with available datasets: CLEVR [47], DMLab [48] and MineRL [49].

3.2 Dead leaves image models

The second family of models is dead leaves, one of the simplest image models. We consider simple shapes (leaves) like circles, triangles and squares, which are positioned uniformly at random in the image canvas until it is covered. To produce each shape, we circumscribe it in a circle with a radius following an exponential distribution with parameter $\lambda$. This procedure has been shown to produce images that share statistics with natural images, such as a $1/|f|^\alpha$ power spectrum [8] and a non-Gaussian distribution of derivatives and wavelet coefficients [50]. In this study we consider four dead leaves models. Dead leaves - Squares: uses only axis-aligned squares. Dead leaves - Oriented: squares are randomly rotated. Dead leaves - Shapes: leaves can be circles, triangles and rectangles. Dead leaves - Textured: uses square leaves filled with a texture sampled from the statistical image models described in the next section. A minimal generator is sketched below.

3.3 Statistical image models

The third family of models is statistical image models of increasing complexity. Several generative models can be composed by using different combinations of properties.

Spectrum: The magnitude of the Fourier transform of many natural images follows a power law, $1/|f|^\alpha$, where $\alpha$ is a constant close to 1 [4]. In this generative model, we sample random noise images constrained to have a Fourier transform magnitude following $1/(|f_x|^a + |f_y|^b)$, with $a$ and $b$ being two random numbers uniformly sampled in the range $[0.5, 3.5]$. This model introduces a bias towards the horizontal and vertical image orientations typical of natural images [51]. To generate color images we first sample three random orthogonal color directions and then generate power-law noise on each channel independently. Samples from this model are shown in Fig. 2(i).

Wavelet-marginal model (WMM): Following [52], we generate textures by modeling their histograms of wavelet coefficients. To produce a texture, we create marginal histograms for the coefficients $c_i$ at $N$ scales ($i \in \{0, \dots, N-1\}$) and 4 orientations following a generalized normal distribution centered at zero, thus $p(c_i) \propto \exp\!\left(-\left(|c_i|/\alpha_i\right)^{\beta_i}\right)$. Each scale $i$ represents the image down-scaled by a factor of $2^i$, and the parameters for each scale are $\alpha_i = 4 \cdot 2^i$ and $\beta_i \sim 0.4 + U(0, 0.4)$. In practice we use $N = 3$ and $N = 4$ for generating 128×128 and 256×256 resolution images respectively. Once we have sampled a marginal distribution of wavelet coefficients for each of the three channels, we do histogram matching iteratively, starting from a Gaussian noise image, following [53]. Fig. 2(j) shows samples from this model.

Color histograms: Here we take a generative model that follows the color distribution of the dead leaves model. First we sample a number of regions $N \sim 3 + \lfloor U(0, 20) \rfloor$, their relative sizes $S \sim 0.001 + U(0, 1)$, and their colors uniformly at random. This results in a color distribution different from uniform.

Combining these different models allows capturing color distributions, spectral components, and wavelet distributions that mimic those typical of natural images. Fig. 2(k) shows the result of sampling from a model that enforces random white noise to have the power-law spectrum and the color histogram of this model. Fig. 2(l) shows samples from a model incorporating all of those properties (spectrum, color and WMM). These models produce intriguing images but fail to capture the full richness of natural images, as shown in Fig. 2(i-l).
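As a concrete illustration of the dead leaves process of Section 3.2, here is a minimal sampler covering only circles and axis-aligned squares; the function name and parameter defaults are our own choices, not taken from the paper's code:

```python
import numpy as np

def dead_leaves(size=128, lam=0.04, max_leaves=5000, rng=None):
    """Minimal dead-leaves sampler: opaque leaves with exponentially distributed
    radii are dropped at uniform positions until the canvas is covered.
    Filling only still-empty pixels simulates the occlusion stack front-to-back."""
    rng = rng or np.random.default_rng()
    img = np.full((size, size, 3), np.nan)       # NaN marks not-yet-covered pixels
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(max_leaves):
        r = rng.exponential(1.0 / lam)           # circumscribing radius ~ Exp(lambda)
        cx, cy = rng.uniform(0, size, 2)         # uniform position on the canvas
        color = rng.uniform(0, 1, 3)
        if rng.random() < 0.5:                   # circle leaf
            leaf = (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
        else:                                    # axis-aligned square leaf
            leaf = (np.abs(xx - cx) < r) & (np.abs(yy - cy) < r)
        leaf &= np.isnan(img[..., 0])            # leaves occlude: only fill empty pixels
        img[leaf] = color
        if not np.isnan(img).any():              # stop once the canvas is covered
            break
    return np.nan_to_num(img)
```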
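The Spectrum model of Section 3.3 can likewise be sketched by shaping the Fourier magnitude of white noise. This single-channel version is a simplification (the paper's color variant additionally samples three random orthogonal color directions), and the small epsilon terms for numerical stability at the DC component are our own assumption:

```python
import numpy as np

def spectrum_noise(size=128, rng=None):
    """Random-phase noise shaped to have Fourier magnitude ~ 1/(|fx|^a + |fy|^b)."""
    rng = rng or np.random.default_rng()
    a, b = rng.uniform(0.5, 3.5, 2)                  # random spectral exponents
    fx = np.abs(np.fft.fftfreq(size))[None, :]
    fy = np.abs(np.fft.fftfreq(size))[:, None]
    mag = 1.0 / (fx ** a + fy ** b + 1e-6)           # target magnitude (eps avoids /0 at DC)
    phase = np.fft.fft2(rng.standard_normal((size, size)))
    phase /= np.abs(phase) + 1e-12                   # keep only the random phase
    img = np.real(np.fft.ifft2(mag * phase))         # impose magnitude, invert the FFT
    img -= img.min()
    return img / (img.max() + 1e-12)                 # normalize to [0, 1]
```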
3.4 Generative adversarial networks

The fourth family of models is based on the architecture of GANs. Commonly, the parameters of GANs are trained to generate realistic samples of a given training distribution. Here, we do not use adversarial training or any training data. Instead, we explore different types of initializations, study the usefulness of the GAN architecture as a prior for image generation, and show that effective data generators can be formed by sampling the model parameters from simple prior distributions. We use an untrained StyleGANv2 [29] and modify its initialization procedure to obtain images with different characteristics. This results in four classes of StyleGAN initializations:

StyleGAN-Random is the default initialization. Fig. 2(m) shows samples from this model. They lack high-frequency image content since the noise maps are not applied at initialization.

StyleGAN-High-freq. In this model, we increase high-frequency image content by sampling the noise maps as $1/f^\alpha$ noise with $\alpha \sim U(0.5, 2)$, which models the statistics of natural images [4]. Additionally, the convolutional filters on all layers are randomly sampled from a bank of 3×3 wavelet filters, and each sampled wavelet is multiplied by a random amplitude $\sim \mathcal{N}(0, 1)$. Note that using wavelets as spatial filters is a common practice when hand-designing networks [54, 55] and seems to capture well the underlying general structure of visual data. The samples in Fig. 2(n) show that this model generates high-frequency structures which are fairly uniformly distributed across the image.

StyleGAN-Sparse. Natural images exhibit a high degree of sparsity. In this model, we increase the sparsity of the images through two modifications. First, we modulate the $1/f^\alpha$ noise maps using a Laplacian envelope: we sample a 4×4 grid of i.i.d. Laplacian noise, resize it to the desired noise map resolution using bicubic upsampling, and multiply this envelope with the original sampled $1/f^\alpha$ noise. Second, at each convolution, we add a random bias $\sim U(-0.2, 0.2)$, which, in conjunction with the nonlinearities, further increases sparsity. Fig. 2(o) shows that the images created by this model indeed appear sparser. Yet, they still lack discernible image structures.

StyleGAN-Oriented. Oriented structures are a crucial component of natural images. We found that an effective way to introduce such structures into the previous models is to tie the wavelets, i.e. to use the same wavelet for all output channels. Under tied wavelets, the standard convolution becomes $y_k = \sum_l a_{k,l}\,(x_l \star f_l) + b_k$, where $y_k$ denotes output channel $k$, $x_l$ denotes input channel $l$, $b_k$ is a bias term, $a_{k,l} \sim \mathcal{N}(0, 1)$ is a random amplitude multiplier, and the wavelet $f_l$ depends only on the input channel but is shared across all output channels. As can be seen in Fig. 2(p), this creates visible, oriented structures in the output images.
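The tied-wavelet convolution above decomposes naturally into a per-channel (depthwise) filtering with $f_l$ followed by a random 1×1 mixing with amplitudes $a_{k,l}$. A minimal PyTorch sketch, assuming a pre-built bank of 3×3 wavelet filters; the `wavelet_bank` interface, the toy two-filter bank, and the zero bias initialization are our own choices, not StyleGAN's API:

```python
import torch
import torch.nn as nn

class TiedWaveletConv(nn.Module):
    """y_k = sum_l a_{k,l} (x_l * f_l) + b_k: one wavelet f_l per input channel,
    shared across all outputs, followed by random 1x1 mixing amplitudes a_{k,l}."""
    def __init__(self, in_ch, out_ch, wavelet_bank):   # wavelet_bank: (B, 3, 3) tensor
        super().__init__()
        idx = torch.randint(len(wavelet_bank), (in_ch,))
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.depthwise.weight.data.copy_(wavelet_bank[idx].unsqueeze(1))  # (in_ch, 1, 3, 3)
        self.mix = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # holds a_{k,l} and b_k
        nn.init.normal_(self.mix.weight)                    # a_{k,l} ~ N(0, 1)
        nn.init.zeros_(self.mix.bias)
    def forward(self, x):
        return self.mix(self.depthwise(x))

# Tiny illustrative bank: two oriented derivative-like 3x3 filters.
bank = torch.tensor([[[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]],
                     [[-1., -1., -1.], [0., 0., 0.], [1., 1., 1.]]])
layer = TiedWaveletConv(in_ch=8, out_ch=16, wavelet_bank=bank)
y = layer(torch.randn(1, 8, 32, 32))   # -> (1, 16, 32, 32)
```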
3.5 Feature visualizations

The final class of models we study is feature visualizations [56]. CNNs can be used to produce novel images by optimizing the output of one or more units with respect to the input image, which is initialized with noise. Although these methods are commonly used to interpret and visualize the internal representation of a network, here we use them as an image-generative process. Following the procedure in [57], we obtain feature visualizations by optimizing the value of a single unit or a combination of two units of a neural network. We select those units from the layer that is typically used as a feature extractor (i.e. the output of the penultimate linear layer), as we have empirically found this to yield more image-like visualizations compared to shallower layers. We create two datasets using this technique: 1) Feature vis. - Random: a ResNet50 with the default random initialization, shown in Fig. 2(q), and 2) Feature vis. - Dead leaves: a ResNet50 trained on dead leaves with diverse shapes, using MoCo v2 [3] and 1.3M sampled images. Samples are shown in Fig. 2(r).

4 Experiments

To study the proposed image generation processes, we train an AlexNet-based encoder using the Alignment and Uniformity loss proposed in [58], which is a form of contrastive loss theoretically equivalent to the popular InfoNCE loss [59]. We generate 105k samples using the proposed image models at 128×128 resolution; these are downsampled to 96×96 and cropped at random to 64×64 before being fed to the encoder. After unsupervised training, we evaluate linear-probe performance (without finetuning) on the representation right before the projection layer, following standard practice [58, 59]. We fix a common set of hyperparameters for all the methods under test to the values found to perform well by the authors of [58]. Further details of the training are provided in the Sup. Mat.

We evaluate performance using Imagenet-100 [60] and the Visual Task Adaptation Benchmark (VTAB) [61]. VTAB consists of 19 classification tasks which are grouped into three categories: a) Natural, consisting of images of the world taken with consumer cameras; b) Specialized, consisting of images from specialized domains such as medical or aerial photography; and c) Structured, where the classification tasks require understanding specific properties like shapes or distances. For each of the datasets in VTAB, we fix the number of training and validation samples to 20k, chosen at random, for the datasets where more samples are available. As an upper bound on the performance expected with synthetic images, we consider the same training procedure but using the following real datasets: 1) Places365 [62], consisting of a wide set of classes but a different domain; 2) STL-10 [63], consisting of only 10 classes of natural images; and 3) Imagenet1k [1], a superset of Imagenet100. As baselines we use mean image colors, raw pixels, and features obtained by an untrained AlexNet (denoted CNN - Random).

4.1 Image model comparison

Figures 3 and 4 show the performance of the proposed fully generative methods from noise on Imagenet100 and VTAB (tables can be found in the Sup. Mat.). The results on both benchmarks show increasing performance on natural data (Imagenet100 and the Natural tasks in VTAB) that matches the qualitative complexity and diversity of the samples seen in Fig. 2. On the other hand, Structured and Specialized tasks do not benefit as much from natural data (as seen in the middle and right-most plots in Fig. 4), and our models perform similarly for the tasks under test in this setting.

4.2 Large-scale experiments

Finally, we test one of the best-performing methods of each type in a large-scale experiment using a ResNet50 encoder instead of AlexNet. We generate 1.3M samples per dataset at 256×256 resolution, and train using the procedure described in MoCo v2 [3] with the default hyperparameters for Imagenet1k (details of the training can be found in the Sup. Mat.).
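Before turning to the results, here is a sketch of the Alignment and Uniformity objective used for the encoder training in Section 4, following the reference formulation in [58]; the exponents alpha = 2 and t = 2 below are the defaults recommended in [58], not values reported by this paper:

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # alignment: positive pairs (two views of the same image) should map close together
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # uniformity: embeddings should spread out over the unit hypersphere
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# x, y: L2-normalized embeddings of two augmented views, shape (batch, dim)
x = F.normalize(torch.randn(8, 128), dim=1)
y = F.normalize(torch.randn(8, 128), dim=1)
loss = align_loss(x, y) + 0.5 * (uniform_loss(x) + uniform_loss(y))
```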
The results in Table 1 show that the relative performance observed with the AlexNet-based encoder is approximately preserved (except for dead leaves, which underperforms FractalDB-1k). Despite using the MoCo v2 hyperparameters found to be good for Imagenet1k (which may not be optimal for our datasets) and not using any real data, our best-performing model achieves 38.12% top-1 accuracy on linear classification on top of the learned features. Furthermore, we achieve 40.06% top-1 accuracy by combining four of our datasets: Dead leaves - Shapes, Feature vis. - Dead leaves, Statistical (Spectrum + Color + WMM), and StyleGAN - Oriented. Additionally, our image models allow training on arbitrary amounts of samples that can be generated on the fly. For StyleGAN - Oriented we found that training with the procedure described above, but continuously sampling new images instead of using the fixed 1.3M images, yields an improved top-1 accuracy of 38.94%.

5 What properties make for good generated data?

As we have seen, representations learned from noise-like images can be surprisingly powerful. Why is this the case? Intuitively, one might think that the features learned from these datasets themselves resemble noise. Interestingly, this is not the case. Fig. 5 shows the diversity of feature activations learned with the different datasets, extracted using the same procedure as described in Section 3.5. Though some datasets fail to capture certain properties (e.g. features learned from the Dead leaves - Squares dataset react only to axis-aligned structures), the feature visualizations qualitatively show that our datasets contain sufficient structure to learn a variety of complex features. Yet, not all of our datasets reach the same performance, which raises the question of what makes a generated dataset good. Which properties are strongly correlated with performance? In the following, we investigate the statistical properties of individual images (Section 5.1) and of datasets as a whole (Section 5.2).

5.1 Properties of individual images

Across our datasets, the image appearance varies significantly (see Fig. 2): they contain different structures and shapes, different color profiles, different levels of "realism", and different effects such as occlusions. How much of the difference in performance can be attributed to such low-level, per-image appearance characteristics?

Color profile. The first and most basic question is how important the color characteristics of our datasets are, since even in such a basic dimension as color, the datasets differ greatly (see Fig. 2). To test the impact of color characteristics on performance, we measure color similarity between the test dataset (here, ImageNet-100) and each of our pretraining datasets. First, we extract L*a*b* values from 50K images from all datasets and from ImageNet-100 (see Footnote 3). The color distribution of each dataset can then be modeled as a three-dimensional Gaussian, and the color difference between two datasets can be computed as the symmetric KL divergence between those distributions. Fig. 6(a) shows that the color similarity of most of our datasets to natural images is fairly low; at the same time, we see a clear negative correlation between performance and color distance (r = −0.57).
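A sketch of this color-distance computation: fit a three-dimensional Gaussian to the L*a*b* pixel values of each dataset and evaluate the symmetric KL divergence in closed form. The function names are our own:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def color_distance(lab_a, lab_b):
    """Symmetric KL between 3D Gaussians fit to L*a*b* pixels of two datasets.
    lab_a, lab_b: arrays of shape (num_pixels, 3)."""
    mu_a, cov_a = lab_a.mean(0), np.cov(lab_a.T)
    mu_b, cov_b = lab_b.mean(0), np.cov(lab_b.T)
    return gaussian_kl(mu_a, cov_a, mu_b, cov_b) + gaussian_kl(mu_b, cov_b, mu_a, cov_a)
```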
Image spectrum. It is well known that the spectrum of natural images can be modeled as a heavy-tailed function $A/|f|^\alpha$, with $A$ a scaling factor and typically $\alpha \in [0.5, 2.0]$ [51]. How well do our datasets resemble these statistics, and how much does this impact performance? Fig. 6(b) shows that the statistics of most of our datasets fall within this range, except for StyleGAN-default. However, while an exponent $\alpha \in [1.4, 1.7]$ seems to benefit performance, the correlation is weak.

Image coherence. Contrastive learning utilizes different views as positive pairs during training, which are commonly generated using random augmentations of the same input image. For this to work, the images need a degree of global coherence, i.e. two views of the same image should be more similar than two views from different images. To test the amount of coherence in our datasets and how this impacts downstream accuracy, we first compute the average perceptual variation within a dataset as $\frac{1}{N} \sum_n \mathrm{LPIPS}(f(I_n), g(I_n))$, where $f$ and $g$ take two random crops of the same image $I_n$ and LPIPS is the perceptual distance from [28]. Fig. 6(c) shows the results. Interestingly, there seems to be a sweet spot of perceptual variation around 0.37; for datasets with lower variation, perceptual variation is strongly correlated with accuracy (r = 0.75). Going beyond this sweet spot decreases accuracy, since now two augmentations of the same image are too dissimilar to provide good information. Finding a similar effect, [64] showed that there is a sweet spot for augmentation strength in contrastive learning.

(Footnote 3: In this and the next section, all measures were computed on 50K random images from the respective datasets and, where applicable, compared against the reference statistics of ImageNet-100. To avoid distortions, correlations are always computed without taking the datasets ImageNet-100 and Imagenet1k into account.)

5.2 Properties of the datasets

Beyond the properties of individual images, it is important to ask which properties of a dataset as a whole explain its level of performance. The first obvious predictor of good performance on a given test task is the distance of the training data distribution to the test data. We can quantify this using the Fréchet Inception Distance (FID) [65], which measures the distance between two sets of images by fitting Gaussians to the distribution of Inception features and measuring the difference between those Gaussians. As shown in Fig. 7(a), FID is strongly negatively correlated (r = −0.85) with accuracy.

An ideal pretraining dataset would therefore resemble the test dataset as closely as possible, achieving a low FID as well as high precision and recall with respect to the test dataset [66]. Interestingly, the trend we observe in our data goes in a different direction: precision is negatively correlated with accuracy (r = −0.67), and the data shows a strong negative correlation between precision and recall (r = −0.88). This can be explained by the fact that our methods are not capable of perfectly reproducing the entirety of ImageNet, leaving two possible scenarios: either a dataset is concentrated in a small part of the image manifold (images from CLEVR and DMLab look naturalistic, yet are severely restricted in their diversity compared to natural images), or the dataset overshoots the manifold of test images, containing as much diversity as possible, in the limit encompassing the test dataset as a subset.

Should a dataset be as realistic as possible (i.e. maximize precision), or as diverse as possible? An extreme version of the first case would be a training dataset that is concentrated around a few points of the test dataset, which would achieve high precision but low recall, and generally have low diversity. Given a distribution of features, we can measure the diversity of a dataset as $|\Sigma|$, i.e. the determinant of the covariance matrix in this feature space. Here, we use the same Inception features as for the FID (see above).
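The log-volume used in the analysis that follows can be computed from these features; a minimal sketch, where using the sign-aware log-determinant (slogdet) to avoid numerical overflow in high dimensions is our own implementation choice:

```python
import numpy as np

def log_volume(features):
    """Dataset diversity as log|Sigma|: the log-determinant of the covariance of the
    dataset's Inception features (features: array of shape (num_images, feature_dim))."""
    _, logdet = np.linalg.slogdet(np.cov(features.T))
    return logdet
```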
Fig. 7(b) shows precision vs. log-volume for all datasets; color indicates the performance on ImageNet-100. As can be seen, datasets with high precision tend not to be particularly diverse: precision is negatively correlated with log-volume (r = −0.84). As volume increases, precision decreases, yet performance benefits. Considering recall, the picture changes. As shown in Fig. 7(c), higher recall is (after a certain point) positively correlated with both log-volume (r = 0.79) and accuracy (r = 0.83). Interestingly, this does not mean that precision is irrelevant – when controlling for recall, precision is again positively correlated with accuracy, albeit more weakly (r = 0.22). The reason for this behavior is that, in the case of our datasets, precision and recall are negatively correlated and thus present a trade-off. If such a trade-off is necessary when designing datasets for pretraining, our results indicate that maximizing recall might be more important than maximizing precision. Interestingly, despite a completely different setting, the same effect was observed in [34].

6 Conclusion

Can we learn visual representations without using real images? The datasets presented in this paper are composed of images with different types of structured noise. We have shown that, when designed using results from past research on natural image statistics, these datasets can successfully train visual representations. We hope that this paper will motivate the study of new generative models capable of producing structured noise that achieves even higher performance across a diverse set of visual tasks. Could this match the performance obtained with ImageNet pretraining? Perhaps, in the absence of a large training set specific to a particular task, the best pretraining might not come from a standard real dataset such as ImageNet. Although it might be tempting to believe that the datasets presented in this paper reduce the impact of dataset bias, biases might still exist (although they might not stem from social biases, they might still impact subsets of the test data differently), and it will be important to characterize them for each particular application.

Acknowledgments: Manel Baradad was supported by the LaCaixa Fellowship and Jonas Wulff was supported by a grant from Intel Corp. This research was partially conducted using computation resources from the Satori cluster donated by IBM to MIT.
1. What is the main contribution of the paper regarding visual representation learning?
2. What are the strengths of the proposed approach, particularly in drawing insights from natural image statistics?
3. What are the weaknesses of the paper, especially regarding the choice of baselines and training regimes?
4. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
5. What are some suggestions provided by the reviewer for improving the paper, such as discussing the impact of scaling up the training set size or considering tailoring the generation processes for specific tasks?
Summary Of The Paper
The paper proposes to learn visual representations from procedurally generated images, drawing insights from natural image statistics. Non-trivial test accuracy was demonstrated on standard image classification datasets. The paper also considers what constitutes a good dataset for training with respect to the statistics of individual images as well as the whole dataset, highlighting precision-recall trade-offs when promoting naturalism vs. diversity.

Review
Originality: good
Quality: good, could use more experiments
Clarity: good
Significance: good

Assessment: The approach is bold and follows nicely from classical and recent results on natural image statistics. The development and presentation are of the highest quality. Good submission overall.

Requests and comments:

A justification for focusing on contrastive learning would be nice. It's not clear whether the conclusions generalize to other architectures and training regimes. Also, a more elaborate justification of the chosen baselines is needed.

Unless I'm missing something, 105K images were used for training and 50K ImageNet images were chosen randomly for evaluation. It would be nice to discuss the impact of gradually scaling up the training set size, on both the test accuracy as well as the image/dataset statistics. I wonder if the proposed generative procedures can be intuitively scheduled into a more effective curriculum, rather than used all at once.

The conclusions include an intriguing remark that synthetic data might prove better than pretraining with, e.g., ImageNet. It would be nice to consider tailoring the generation processes to push the performance on specific tasks, e.g., the structural categories in VTAB.

It's natural to ask how many (fewer) real images would be needed to close the performance gap to an actual baseline, like a meta-learning twist? By analogy to how mammalian vision is thought to be pretrained by retinal waves, it's likely that mammals still need even a few real examples and continue to get better, e.g., learning to read and even typoglycemia. How does the precision-recall trade-off manifest, and how should we think of naturalism vs. diversity, in this (meta-learning) paradigm?
NIPS
Title
Learning to See by Looking at Noise

Abstract
Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper, we go a step further and ask if we can do away with real image datasets entirely, by learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations.

1 Introduction

The importance of data in modern computer vision is hard to overstate. Time and again we have seen that better models are empowered by bigger data. The ImageNet dataset [1], with its 1.4 million labeled images, is widely thought to have spurred the era of deep learning, and since then the scale of vision datasets has been increasing at a rapid pace; current models are trained on up to one billion images [2]. In this paper, we question the necessity of such massive training sets of real images. Instead, we investigate a suite of procedural noise models that generate images from simple random processes. These are then used as training data for a visual representation learner. We identify two key properties that make for good synthetic data for training vision systems: 1) naturalism and 2) diversity. Interestingly, the most naturalistic data is not always the best, since naturalism can come at the cost of diversity. The fact that naturalistic data helps may not be surprising, and it suggests that, indeed, large-scale real data has value. However, we find that what is crucial is not that the data be real but that it be naturalistic, i.e. it must capture certain structural properties of real data. Many of these properties can be captured in simple noise models (Fig. 1).

The implications of our work are severalfold. First, our results call into question the true complexity of the problem of vision – if very short programs can generate and train a high-performing vision system, then vision may be simpler than we thought, and might not require huge data-driven systems to achieve adequate performance. Second, our methods open the door to training vision systems without reliance on datasets. The value in this is that datasets are encumbered by numerous costs: they may be expensive, biased, private, or simply intractable to collect. We do not argue for removing datasets from computer vision entirely (as real data might be required for evaluation), but rather for reconsidering what can be done in their absence.

2 Related work

2.1 A short history of image generative models

Out of the $\mathbb{R}^{3n^2}$ dimensional space spanned by $3 \times n^2$ color images, natural images occupy a small part of that space; the rest is mostly filled by noise.
1. What are the strengths and weaknesses of the paper regarding its contribution to exploring synthetic data for deep neural network training?
2. How does the reviewer assess the diversity and relevance of the generative mechanisms investigated in the paper?
3. What are the limitations of the paper regarding its comparisons with other works and contextualization of results?
4. Do you have any suggestions for improving the paper by combining synthetic data with natural or real data?
5. How might the paper be expanded to provide a more comprehensive understanding of the potential of synthetic data approaches?
Summary Of The Paper
The paper investigates the effectiveness of synthetic data generated via procedural noise processes as training data for deep neural networks. Specifically, the authors consider fractals, computer graphics, dead leaves models, statistical image models, and untrained GAN generators, as well as combinations thereof, as mechanisms to generate data, and train a network in self-supervised fashion on these different types of data. They find that deep networks trained in this fashion significantly outperform randomly initialized networks, and are competitive with deep networks that are trained on large natural image datasets in specialized domains (not necessarily natural images). The paper further presents a suite of experiments that aims to relate the properties of the synthetic data (e.g. color distribution) with the downstream performance of models trained on it.

Review
Given the recent scaling trends of training deep networks on ever-growing datasets, it is important to critically evaluate the benefits of large-scale data. In that context the paper explores an interesting and highly relevant direction, questioning the necessity of large-scale natural image datasets. The paper investigates a broad and diverse choice of generative mechanisms, and also explores sound metrics in the systematic analysis of the relation between dataset properties and downstream predictive performance. To my knowledge this selection of datasets and metrics has not been explored before. The paper is well-written and mostly easy to follow. The following points could be improved:

How do the synthetic datasets compare to the single-image training from [26]? How does training on synthetic data compare to early self-supervised approaches (i.e. non-contrastive approaches), for example self-supervised learning via rotation prediction (Gidaris et al., ICLR'18)? I found it difficult to put the results in the paper into context, and it is therefore unclear to me how big the potential of the synthetic data approaches in the paper is.

Are there ways to combine synthetic data with natural/real data, possibly with labels (e.g. similar to S4L (Zhai et al., CVPR'19))? Since there seems to be quite some gap to models trained on natural images, and models trained purely on the paper's data generation procedures are unlikely to be deployed, it would be interesting to see whether the synthetic data could be useful for augmenting natural/real data.

Intuitively I would expect that networks trained on synthetic data do well on low-level texture-based tasks and struggle more on tasks requiring high-level visual features. An investigation into this aspect would make the paper more complete. Furthermore, for parameterized models it would be interesting to see how individual parameters affect downstream performance.

Minor comments: The presentation of the different types of engineered filters described in Sec. 3.4 is dense; maybe there is a way to present this more accessibly. Is the scale and cropping procedure (in particular the resolutions) described in L196 used throughout the paper? If so, is there an intuition for why the patch size 64x64 is appropriate for training, assuming the testing resolution is higher?

Overall, I think the paper explores a highly relevant direction. However, I feel that the paper should be extended along the axes outlined above.
NIPS
Title Learning to See by Looking at Noise
Abstract Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper, we go a step further and ask if we can do away with real image datasets entirely, by learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data, but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations.
1 Introduction
The importance of data in modern computer vision is hard to overstate. Time and again we have seen that better models are empowered by bigger data. The ImageNet dataset [1], with its 1.4 million labeled images, is widely thought to have spurred the era of deep learning, and since then the scale of vision datasets has been increasing at a rapid pace; current models are trained on up to one billion images [2]. In this paper, we question the necessity of such massive training sets of real images. Instead, we investigate a suite of procedural noise models that generate images from simple random processes. These are then used as training data for a visual representation learner. We identify two key properties that make for good synthetic data for training vision systems: 1) naturalism and 2) diversity. Interestingly, the most naturalistic data is not always the best, since naturalism can come at the cost of diversity. The fact that naturalistic data helps may not be surprising, and it suggests that, indeed, large-scale real data has value. However, we find that what is crucial is not that the data be real but that it be naturalistic, i.e., it must capture certain structural properties of real data. Many of these properties can be captured in simple noise models (Fig. 1). The implications of our work are severalfold. First, our results call into question the true complexity of the problem of vision – if very short programs can generate and train a high-performing vision system, then vision may be simpler than we thought, and might not require huge data-driven systems to achieve adequate performance. Second, our methods open the door to training vision systems without reliance on datasets. The value in this is that datasets are encumbered by numerous costs: they may be expensive, biased, private, or simply intractable to collect. We do not argue for removing datasets from computer vision entirely (as real data might be required for evaluation), but rather for reconsidering what can be done in their absence.
2 Related work
2.1 A short history of image generative models
Out of the $\mathbb{R}^{3n^2}$-dimensional space spanned by $3 \times n^2$ color images, natural images occupy only a small part; the rest is mostly filled by noise.
During the last decades, researchers have studied the space of natural images to build models capable of compressing, denoising, and generating images. The result of this research is a sequence of generative image models of increasing complexity, narrowing down the space occupied by natural images within $\mathbb{R}^{3n^2}$. One surprising finding is that natural images follow a power law on the magnitude of their Fourier transform [4, 5]. This is the basis of Wiener image denoising [6] and scale-invariant models of natural images [7, 8]. The dead leaves model [8, 9] was an attempt at generating images that explained the power law found in natural images, and it inspired the Retinex algorithm [10]. The multiscale and self-similar nature of natural images inspired the use of fractals [11, 12, 13] as image models. Coding research for TV [5] and image modeling [6, 14, 15, 16] showed another remarkable property of natural images: the responses of zero-mean wavelets to natural images are sparse and follow a generalized Laplacian distribution [6]. Color and intensity distributions in natural images have also been studied and found to follow rules that deviate from random noise [10, 17]. Research in texture synthesis showed how these statistical image models produced more realistic-looking textures [18, 19]; those models required fitting the image model parameters to specific images in order to sample more images "like it". Recently, GANs [20] have shown remarkable image synthesis results [21]. Although GANs need real images to learn the network parameters, we show in this paper that they introduce a structural prior useful for encoding image properties without requiring any training.
2.2 Training without real data and training with synthetic data
Through the above progression, generative models have become increasingly complex, with more parameters and more training data needed to fit these parameters. The same has happened with vision systems in general: state-of-the-art systems like BiT [22], CLIP [23], and SEER [2] obtain their best results on 300 million, 400 million, and 1 billion images, respectively. These papers further show that such large data is critical to getting the best performance. While this may be true, other work has shown that much smaller data is sufficient to already reach decent performance. A single image can be enough to train, from scratch, a compelling generative model [24, 25] or visual representation [26], and, even with no training data at all, deep architectures already encode useful image priors that can be exploited for low-level vision tasks [27] or for measuring perceptual similarity [28]. Our results, using an untrained StyleGANv2 [29] to generate training data, further affirm the utility of the structural priors in neural net architectures. An alternative to training with real data is to train on synthetic data. This approach has been widely used in low-level tasks like depth, stereo, or optical flow estimation [30, 31, 32, 33], where 3D rendering engines can provide densely annotated data to learn from. Interestingly, for this class of tasks diversity is more important than realism [34], making procedurally generated scenes an effective alternative to renderings designed by professional 3D artists [35, 36]. Recent work has also investigated using deep generative models as a source of synthetic data to train classifiers [37, 38] and visual representations [39], or to generate synthetic annotated data for other downstream tasks [40, 41, 42].
However, these generative models are still fit to real image datasets and produce realistic-looking images as samples. In this paper we push even further away from realism, generating synthetic data from simple noise processes. The closest prior work in this direction is the pioneering work of [43], which used automatically generated fractals to pre-train neural networks that converge faster than their randomly initialized counterparts. While they demonstrated that fractals can be effective for pre-training, there is still a large gap compared to pre-training on real data. We explore a much broader range of noise processes, including many classic models from the image coding and texture synthesis literature. The use of randomized training data has also been explored under the heading of domain randomization [44], where 3D synthetic data is rendered under a variety of lighting conditions to transfer to real environments where the lighting may be unknown. Our approach can be viewed as an extreme form of domain randomization that does away with the simulation engine entirely: make the training data so diverse that a natural image will just look like a sample from the noise process. There is some evidence that biology takes a similar approach during the prenatal development of the vision system. "Retinal waves" – spontaneous, semi-random activations of the retina – are thought to entrain edge detectors and other simple structures in the developing mammalian brain [45].
3 A progression of image generative models
Here we provide the details of the image models we use in this paper. We test a suite of generative models of the form $g_\theta : z \to x$, where $z$ are stochastic latent variables and $x$ is an image. We treat image generation as a hierarchical process in which first the parameters of a model, $\theta$, are sampled, and then the image is sampled given these parameters and stochastic noise. The parameters $\theta$ define properties of the distribution from which we will sample images, for example, the mean color of the image. The sampling process is: $\theta \sim p(\theta)$, $z \sim p(z)$, and $x = g_\theta(z)$, which corresponds to sampling images from the distribution $p(x, \theta) = p(x \mid \theta)\,p(\theta)$. The parameters $\theta$ that define the model may or may not be fit to real data. We explore the case where the parameters are not fit to real data but instead sampled from simple prior distributions. Next, we describe the generative image models that we will evaluate in this paper (Fig. 2).
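A minimal sketch of this two-level sampling process (illustrative only; the trivial generator here stands in for the per-family generators described in the rest of this section, and the parameter names are hypothetical):

```python
import numpy as np

def sample_theta(rng):
    """First level: model parameters theta ~ p(theta) (illustrative choices)."""
    return {"mean_color": rng.uniform(0.0, 1.0, size=3),
            "contrast": rng.uniform(0.05, 0.3)}

def g(theta, z):
    """Render x = g_theta(z): a deliberately trivial generator for illustration."""
    x = theta["mean_color"] + theta["contrast"] * z
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
theta = sample_theta(rng)             # theta ~ p(theta)
z = rng.standard_normal((64, 64, 3))  # z ~ p(z)
x = g(theta, z)                       # x sampled from p(x | theta)
```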
3.1 Procedural image models
The first class of models belongs to the family of procedural image models. Procedural models are capable of generating very realistic-looking images in specific image domains. We also include fractals in this set, although they could form a class of their own. Fractals: Fractals have been shown to capture geometric properties of elements found in nature [46]. Consequently, image models consisting of renderings of human-designed shapes with fractal structure [43] are likely to reproduce patterns found in natural images. CG: Simulators and game engines rely on a mixture of human-driven design and procedural methods to generate environments simulating real-world properties such as illumination, 3D, and semantics. Here we include three CG models popular in computer vision with available datasets: CLEVR [47], DMLab [48], and MineRL [49].
3.2 Dead leaves image models
The second family of models is dead leaves, one of the simplest image models. We consider simple shapes (leaves) such as circles, triangles, and squares, which are positioned uniformly at random on the image canvas until it is covered. To produce each shape, we circumscribe it in a circle whose radius follows an exponential distribution with parameter $\lambda$. This procedure has been shown to produce images with statistics similar to natural images, such as a $1/|f|^\alpha$ power spectrum [8] and non-Gaussian distributions of derivatives and wavelet coefficients [50]. In this study we consider four dead leaves models. Dead leaves - Squares: only axis-aligned squares. Dead leaves - Oriented: squares are randomly rotated. Dead leaves - Shapes: leaves can be circles, triangles, and rectangles. Dead leaves - Textured: square leaves filled with a texture sampled from the statistical image models described in the next section.
3.3 Statistical image models
The third family of models is statistical image models of increasing complexity. Several generative models can be composed by using different combinations of properties. Spectrum: The magnitude of the Fourier transform of many natural images follows a power law, $1/|f|^\alpha$, where $\alpha$ is a constant close to 1 [4]. In this generative model, we sample random noise images constrained to have a Fourier magnitude following $1/(|f_x|^a + |f_y|^b)$, with $a$ and $b$ two random numbers uniformly sampled in the range $[0.5, 3.5]$. This model introduces a bias towards the horizontal and vertical image orientations typical of natural images [51]. To generate color images, we first sample three random orthogonal color directions and then generate power-law noise on each channel independently. Samples from this model are shown in Fig. 2(i). Wavelet-marginal model (WMM): Following [52], we generate textures by modeling their histograms of wavelet coefficients. To produce a texture, we create marginal histograms for the coefficients $c_i$ at $N$ scales ($i \in \{0, \dots, N-1\}$) and 4 orientations following a generalized normal distribution centered at zero, i.e., $p(c_i) \propto \exp\!\left(-\left(|c_i|/\alpha_i\right)^{\beta_i}\right)$. Each scale $i$ represents the image downscaled by a factor of $2^i$, and the parameters for each scale are $\alpha_i = 4^{2^i}$ and $\beta_i \sim 0.4 + U(0, 0.4)$. In practice we use $N = 3$ and $N = 4$ for generating $128 \times 128$ and $256 \times 256$ resolution images, respectively. Once we have sampled a marginal distribution of wavelet coefficients for each of the three channels, we do histogram matching iteratively, starting from a Gaussian noise image, following [53]. Fig. 2(j) shows samples from this model. Color histograms: Here we take a generative model that follows the color distribution of the dead leaves model. First we sample a number of regions $N \sim 3 + \lfloor U(0, 20) \rfloor$, their relative sizes $S \sim 0.001 + U(0, 1)$, and their colors uniformly at random. This results in a color distribution different from uniform. Combining these different models allows capturing color distributions, spectral components, and wavelet distributions that mimic those typical of natural images. Fig. 2(k) shows the result of sampling from a model that enforces random white noise to have the power-law spectrum and the color histogram of this model. Fig. 2(l) shows samples from a model incorporating all of these properties (spectrum, color, and WMM). These models produce intriguing images but fail to capture the full richness of natural images, as shown in Fig. 2(i-l).
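As a concrete illustration of the spectrum model above, here is a minimal NumPy sketch (a sketch under the stated assumptions, not the authors' implementation; grayscale only, omitting the random orthogonal color directions):

```python
import numpy as np

def sample_spectrum_image(n=128, rng=None):
    """Sample noise constrained to a 1/(|fx|^a + |fy|^b) Fourier magnitude."""
    rng = rng or np.random.default_rng()
    a, b = rng.uniform(0.5, 3.5, size=2)          # theta ~ p(theta)
    fx = np.fft.fftfreq(n)[None, :]               # horizontal frequencies
    fy = np.fft.fftfreq(n)[:, None]               # vertical frequencies
    mag = 1.0 / (np.abs(fx) ** a + np.abs(fy) ** b + 1e-8)   # guard the DC term
    phase = np.exp(2j * np.pi * rng.uniform(size=(n, n)))    # random phases, z ~ p(z)
    img = np.real(np.fft.ifft2(mag * phase))
    return (img - img.min()) / (img.max() - img.min())       # normalize to [0, 1]
```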
3.4 Generative adversarial networks
The fourth family of models is based on the architecture of GANs. Commonly, the parameters of GANs are trained to generate realistic samples of a given training distribution. Here, we do not use adversarial training or any training data. Instead, we explore different types of initializations, study the usefulness of the GAN architecture as a prior for image generation, and show that effective data generators can be formed by sampling the model parameters from simple prior distributions. We use an untrained StyleGANv2 [29] and modify its initialization procedure to obtain images with different characteristics. This results in four classes of StyleGAN initializations. StyleGAN-Random is the default initialization; Fig. 2(m) shows samples from this model. They lack high-frequency image content since the noise maps are not applied at initialization. StyleGAN-High-freq.: In this model, we increase high-frequency image content by sampling the noise maps as $1/f^\alpha$ noise with $\alpha \sim U(0.5, 2)$, which models the statistics of natural images [4]. Additionally, the convolutional filters on all layers are randomly sampled from a bank of $3 \times 3$ wavelet filters, and each sampled wavelet is multiplied by a random amplitude $\sim N(0, 1)$. Note that using wavelets as spatial filters is common practice when hand-designing networks [54, 55] and seems to capture well the underlying general structure of visual data. The samples in Fig. 2(n) show that this model generates high-frequency structures that are fairly uniformly distributed across the image. StyleGAN-Sparse: Natural images exhibit a high degree of sparsity. In this model, we increase the sparsity of the images through two modifications. First, we modulate the $1/f^\alpha$ noise maps using a Laplacian envelope: we sample a $4 \times 4$ grid of i.i.d. Laplacian noise, resize it to the desired noise map resolution using bicubic upsampling, and multiply this envelope with the original sampled $1/f^\alpha$ noise. Second, at each convolution, we add a random bias $\sim U(-0.2, 0.2)$, which, in conjunction with the nonlinearities, further increases sparsity. Fig. 2(o) shows that the images created by this model indeed appear sparser, yet they still lack discernible image structures. StyleGAN-Oriented: Oriented structures are a crucial component of natural images. We found that an effective way to introduce such structures to the previous models is to tie the wavelets, i.e., to use the same wavelet for all output channels. Under tied wavelets, the standard convolution becomes $y_k = \sum_l \left[a_{k,l}\,(x_l \star f_l)\right] + b_k$, where $y_k$ denotes output channel $k$, $x_l$ denotes input channel $l$, $b_k$ is a bias term, $a_{k,l} \sim N(0, 1)$ is a random amplitude multiplier, and the wavelet $f_l$ depends only on the input channel but is shared across all output channels. As can be seen in Fig. 2(p), this creates visible, oriented structures in the output images.
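A minimal PyTorch sketch of the tied-wavelet convolution used in StyleGAN-Oriented (helper names and the tiny oriented filter bank are illustrative stand-ins for the 3x3 wavelet bank mentioned above, not the authors' code):

```python
import torch
import torch.nn.functional as F

def tied_wavelet_conv(x, wavelets):
    """y_k = sum_l a_{k,l} * (x_l conv f_l) + b_k, with f_l shared across outputs."""
    n, c_in, h, w = x.shape
    c_out = c_in  # illustrative choice
    # One 3x3 wavelet per *input* channel, applied depthwise (groups=c_in).
    f = wavelets[torch.randint(len(wavelets), (c_in,))]            # (c_in, 3, 3)
    per_channel = F.conv2d(x, f.unsqueeze(1), padding=1, groups=c_in)
    # Random amplitudes a_{k,l} ~ N(0,1) mix the filtered channels; random bias b_k.
    a = torch.randn(c_out, c_in)
    b = torch.empty(c_out).uniform_(-0.2, 0.2)
    return torch.einsum("kl,nlhw->nkhw", a, per_channel) + b.view(1, -1, 1, 1)

# Usage with a tiny oriented filter bank (Sobel pair as a stand-in for wavelets).
sobel = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
wavelets = torch.stack([sobel, sobel.t()])
y = tied_wavelet_conv(torch.randn(1, 8, 32, 32), wavelets)
```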
3.5 Feature visualizations
The final class of models we study is feature visualizations [56]. CNNs can be used to produce novel images by optimizing the output of a single unit or of multiple units with respect to the input image, which is initialized with noise. Although these methods are commonly used to interpret and visualize the internal representations of a network, here we use them as an image generation process. Following the procedure in [57], we obtain feature visualizations by optimizing the value of a single unit or a combination of two units of a neural network. We select those units from the layer that is typically used as a feature extractor (i.e., the output of the penultimate linear layer), as we have empirically found this to yield more image-like visualizations compared to shallower layers. We create two datasets using this technique: 1) Feature vis. - Random: a ResNet50 with the default random initialization, shown in Fig. 2(q); and 2) Feature vis. - Dead leaves: a ResNet50 trained on dead leaves with diverse shapes, using MoCo v2 [3] and 1.3M sampled images, shown in Fig. 2(r).
4 Experiments
To study the proposed image generation processes, we train an AlexNet-based encoder using the alignment and uniformity loss proposed in [58], a form of contrastive loss theoretically equivalent to the popular InfoNCE loss [59]. We generate 105k samples using the proposed image models at 128x128 resolution, which are then downsampled to 96x96 and cropped at random to 64x64 before being fed to the encoder. After unsupervised training, we evaluate linear classification performance (without finetuning) on the representation right before the projection layer, following standard practice [58, 59]. We fix a common set of hyperparameters for all the methods under test to the values found to perform well by the authors of [58]. Further details of the training are provided in the Sup. Mat. We evaluate performance on Imagenet-100 [60] and the Visual Task Adaptation Benchmark (VTAB) [61]. VTAB consists of 19 classification tasks grouped into three categories: a) Natural, consisting of images of the world taken with consumer cameras; b) Specialized, consisting of images of specialized domains, such as medical or aerial photography; and c) Structured, where the classification tasks require understanding specific properties like shapes or distances. For each of the datasets in VTAB, we fix the number of training and validation samples to 20k, chosen at random for the datasets where more samples are available. As an upper bound on the maximum expected performance with synthetic images, we consider the same training procedure on the following real datasets: 1) Places365 [62], consisting of a wide set of classes but a different domain; 2) STL-10 [63], consisting of only 10 classes of natural images; and 3) Imagenet1k [1], a superset of Imagenet100. As baselines we use mean image colors, raw pixels, and features obtained by an untrained AlexNet (denoted CNN - Random).
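A minimal PyTorch sketch of the alignment and uniformity objectives from [58] used above (a sketch of the published formulation with its recommended defaults for alpha and t, not the authors' exact training code):

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    """Alignment: two views of the same image should map close together."""
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    """Uniformity: embeddings should spread uniformly on the unit hypersphere."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Usage: x, y are L2-normalized embeddings of two augmented views, shape (N, D).
x = F.normalize(torch.randn(256, 128), dim=1)
y = F.normalize(x + 0.1 * torch.randn(256, 128), dim=1)
loss = align_loss(x, y) + uniform_loss(x)
```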
4.1 Image model comparison
Figures 3 and 4 show the performance of the proposed fully generative methods from noise on Imagenet100 and VTAB (tables can be found in the Sup. Mat.). The results on both benchmarks show increased performance on natural data (Imagenet100 and the Natural tasks in VTAB) that matches the qualitative complexity and diversity of the samples seen in Fig. 2. On the other hand, Structured and Specialized tasks do not benefit as much from natural data (as seen in the middle and right-most plots in Fig. 4), and our models perform similarly across the tasks under test in this setting.
4.2 Large-scale experiments
Finally, we test one of the best-performing methods of each type in a large-scale experiment using a ResNet50 encoder instead of AlexNet. We generate 1.3M samples of the datasets at 256x256 resolution and train using the procedure described in MoCo v2 [3] with the default hyperparameters for Imagenet1k (details of the training can be found in the Sup. Mat.). The results in Table 1 show that the relative performance from the experiments with the AlexNet-based encoder is approximately preserved (except for dead leaves, which underperforms FractalDB-1k). Despite using the MoCo v2 hyperparameters found to be good for Imagenet1k (which may not be optimal for our datasets) and not using any real data, our best-performing model achieves 38.12% top-1 accuracy on linear classification on top of the learned features. Furthermore, we achieve 40.06% top-1 accuracy by combining four of our datasets: Dead leaves - Shapes, Feature Visualizations - Dead leaves, Statistical (Spectrum + Color + WMM), and StyleGAN - Oriented. Additionally, our image models allow training on arbitrary amounts of samples that can be generated on the fly. For StyleGAN - Oriented we found that training with the procedure described above, but continuously sampling instead of using the fixed 1.3M images, yields an improved top-1 accuracy of 38.94%.
5 What properties make for good generated data?
As we have seen, representations learned from noise-like images can be surprisingly powerful. Why is this the case? Intuitively, one might think that the features learned from these datasets themselves resemble noise. Interestingly, this is not the case. Fig. 5 shows the diversity of feature activations learned with the different datasets, extracted using the same procedure as described in Section 3.5. Though some datasets fail to capture certain properties (e.g., the Dead leaves - Squares dataset only reacts to axis-aligned images), the feature visualizations qualitatively show that our datasets contain sufficient structure to learn a variety of complex features. Yet, not all of our datasets reach the same performance, which raises the question of what makes a generated dataset good. Which properties are strongly correlated with performance? In the following, we investigate the statistical properties of individual images (5.1) and of datasets as a whole (5.2).
5.1 Properties of individual images
Across our datasets, the image appearance varies significantly (see Fig. 2): they contain different structures and shapes, different color profiles, different levels of "realism", and different effects such as occlusions. How much of the difference in performance can be attributed to such low-level, per-image appearance characteristics? Color profile. The first and most basic question is how important the color characteristics of our datasets are, since even in such a basic dimension as color, the datasets differ greatly (see Fig. 2). To test the impact of color characteristics on performance, we measure the color similarity between the test dataset (here, ImageNet-100) and each of our pretraining datasets. First, we extract L*a*b values from 50K images from all datasets and from ImageNet-100. (In this and the next section, all measures are computed on 50K random images from the respective datasets and, where applicable, compared against the reference statistics of ImageNet-100; to avoid distortions, correlations are always computed without taking ImageNet-100 and ImageNet-1k into account.) The color distribution of each dataset can then be modeled as a three-dimensional Gaussian, and the color difference between two datasets can be computed as the symmetric KL divergence between those distributions. Fig. 6(a) shows that the color similarity of most of our datasets to natural images is fairly low; at the same time, we see a clear negative correlation between performance and color distance (r = -0.57).
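A minimal NumPy sketch of this color-distance measure (a sketch under the stated setup; it assumes the L*a*b values have already been extracted, e.g. with `skimage.color.rgb2lab`):

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for k-dimensional Gaussians."""
    k = mu0.size
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def color_distance(lab_a, lab_b):
    """Symmetric KL between 3D Gaussians fit to two datasets' L*a*b pixels.
    lab_a, lab_b: arrays of shape (num_pixels, 3)."""
    mu_a, S_a = lab_a.mean(0), np.cov(lab_a.T)
    mu_b, S_b = lab_b.mean(0), np.cov(lab_b.T)
    return gaussian_kl(mu_a, S_a, mu_b, S_b) + gaussian_kl(mu_b, S_b, mu_a, S_a)
```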
Image spectrum. It is well known that the spectrum of natural images can be modeled as a heavy-tailed function $A/|f|^\alpha$, with $A$ a scaling factor and typically $\alpha \in [0.5, 2.0]$ [51]. How well do our datasets resemble these statistics, and how much does this impact performance? Fig. 6(b) shows that the statistics of most of our datasets fall within this range, except for StyleGAN-default. However, while an exponent $\alpha \in [1.4, 1.7]$ seems to benefit performance, the correlation is weak. Image coherence. Contrastive learning utilizes different views as positive pairs during training, which are commonly generated using random augmentations of the same input image. For this to work, the images need a degree of global coherence, i.e., two views of the same image should be more similar than two views from different images. To test the amount of coherence in our datasets and how this impacts downstream accuracy, we first compute the average perceptual variation within a dataset as $\frac{1}{N}\sum_{n} \mathrm{LPIPS}\left(f(I_n), g(I_n)\right)$, where $f, g$ are two random crops of the same image and LPIPS is the perceptual distance from [28]. Fig. 6(c) shows the results. Interestingly, there seems to be a sweet spot of perceptual variation around 0.37; for datasets with lower variation, perceptual variation is strongly correlated with accuracy (r = 0.75). Going beyond this sweet spot decreases accuracy, since then two augmentations of the same image are too dissimilar to provide good information. Finding a similar effect, [64] showed that there is a sweet spot for augmentation strength in contrastive learning.
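A minimal sketch of this perceptual-variation measure using the `lpips` package (a sketch under stated assumptions: images as float tensors in [-1, 1], and an illustrative crop size rather than the authors' exact choice):

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')  # the perceptual distance from [28]

def perceptual_variation(images, crop=64):
    """Mean LPIPS between two random crops f(I_n), g(I_n) of each image."""
    dists = []
    for img in images:  # img: (3, H, W), values in [-1, 1]
        _, h, w = img.shape
        ys = torch.randint(h - crop, (2,))
        xs = torch.randint(w - crop, (2,))
        f = img[:, ys[0]:ys[0] + crop, xs[0]:xs[0] + crop].unsqueeze(0)
        g = img[:, ys[1]:ys[1] + crop, xs[1]:xs[1] + crop].unsqueeze(0)
        dists.append(loss_fn(f, g).item())
    return sum(dists) / len(dists)
```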
5.2 Properties of the datasets
Beyond the properties of individual images, it is important to ask which properties of a dataset as a whole explain its level of performance. The first obvious predictor of good performance on a given test task is the distance of the training data distribution to the test data. We can quantify this using the Fréchet Inception Distance [65], which measures the distance between two sets of images by fitting Gaussians to the distributions of Inception features and measuring the difference between those Gaussians. As shown in Fig. 7(a), FID is strongly negatively correlated (r = -0.85) with accuracy. An ideal pretraining dataset would therefore resemble the test dataset as closely as possible, achieving a low FID as well as high precision and recall with respect to the test dataset [66]. Interestingly, the trend we observe in our data points goes in a different direction: precision is negatively correlated with accuracy (r = -0.67), and the data shows a strong negative correlation between precision and recall (r = -0.88). This can be explained by the fact that our methods are not capable of perfectly reproducing the entirety of ImageNet, leaving two possible scenarios: either a dataset is concentrated in a small part of the image manifold (images from CLEVR and DMLab look naturalistic, yet are severely restricted in their diversity compared to natural images), or the dataset overshoots the manifold of test images, containing as much diversity as possible, in the limit encompassing the test dataset as a subset. Should a dataset be as realistic as possible (i.e., maximize precision), or as diverse as possible? An extreme version of the first case would be a training dataset concentrated around a few points of the test dataset, which would achieve high precision but low recall, and generally have low diversity. Given a distribution of features, we can measure the diversity of a dataset as $|\Sigma|$, i.e., the determinant of the covariance matrix in this feature space. Here, we use the same Inception features as for the FID (see above). Fig. 7(b) shows precision vs. log-volume for all datasets; color indicates the performance on ImageNet-100. As can be seen, datasets with high precision tend not to be particularly diverse; precision is negatively correlated with log-volume (r = -0.84). As volume increases, precision decreases, yet performance benefits. Considering recall, the picture changes. As shown in Fig. 7(c), higher recall is (after a certain point) positively correlated with both log-volume (r = 0.79) and accuracy (r = 0.83). Interestingly, this does not mean that precision is irrelevant – when controlling for recall, precision is again positively correlated with accuracy, albeit more weakly (r = 0.22). The reason for this behavior is that, in the case of our datasets, precision and recall are negatively correlated and thus present a trade-off. If such a trade-off is necessary when designing datasets for pretraining, our results indicate that maximizing recall might be more important than maximizing precision. Interestingly, despite a completely different setting, the same effect was observed in [34].
6 Conclusion
Can we learn visual representations without using real images? The datasets presented in this paper are composed of images with different types of structured noise. We have shown that, when designed using results from past research on natural image statistics, these datasets can successfully train visual representations. We hope that this paper will motivate the study of new generative models capable of producing structured noise that achieves even higher performance on a diverse set of visual tasks. Would it be possible to match the performance obtained with ImageNet pretraining? Perhaps, in the absence of a large training set specific to a particular task, the best pre-training might not come from a standard real dataset such as ImageNet. Although it might be tempting to believe that the datasets presented in this paper reduce the impact of dataset bias, biases may still exist (they may not stem from social biases, yet they may still affect different subsets of the test data differently), and it will be important to characterize them for each particular application. Acknowledgments: Manel Baradad was supported by the LaCaixa Fellowship and Jonas Wulff was supported by a grant from Intel Corp. This research was partially conducted using computation resources from the Satori cluster donated by IBM to MIT.
1. What is the main contribution of the paper regarding deep neural networks and computer vision applications?
2. What are the strengths of the paper, particularly in its analysis and evaluation?
3. What are the weaknesses of the paper, specifically regarding the experiments?
4. How does the reviewer assess the clarity and focus of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The paper studies whether real natural images are necessary to train deep neural networks for computer vision applications. To that end, the authors create datasets from a large variety of synthetic image generation processes and evaluate the performance of neural networks trained using these synthetic images. In a comprehensive analysis the authors demonstrate that neural networks trained on synthetic images obtain surprisingly good results, and they also identify key properties of synthetic datasets that lead to good downstream performance.
Review
Strengths: As the authors correctly point out, recent advances in neural networks for visual recognition increasingly depend on larger and larger datasets. The apparent need for massive training sets raises many challenges, as those training sets tend to be proprietary and data at this scale is incredibly difficult to obtain. As a consequence, methods that reduce the need for large datasets are of great interest to both practitioners and researchers; as such, this paper is of great interest to the research community. The analysis in this paper is outlined very clearly and the evaluation is comprehensive. Furthermore, the discussion and presentation of the results is focused and provides clear insights to the reader.
Weaknesses: One experiment that seems to be missing from the analysis is models trained on mixtures of datasets. In the paper, each model is only trained on images generated from a single image generation process. As the authors point out, recall is very important for downstream performance. Could mixing multiple datasets lead to better coverage and thus better downstream performance?
Minor comments: Figure 7: For consistency, I suggest also encoding the accuracy of the baselines by color.
NIPS
Title Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion
Abstract In this paper, we propose our Correlation For Completion Network (CFCNet), an end-to-end deep learning model that uses the correlation between two data sources to perform sparse depth completion. CFCNet learns to capture, to the largest extent, the semantically correlated features between RGB and depth information. Through pairs of image pixels and the visible measurements in a sparse depth map, CFCNet facilitates feature-level mutual transformation of different data sources. Such a transformation enables CFCNet to predict features and reconstruct data of missing depth measurements according to their corresponding, transformed RGB features. We extend canonical correlation analysis to the 2D domain and formulate it as one of our training objectives (i.e., 2D deep canonical correlation analysis, or the "2D2CCA loss"). Extensive experiments validate the ability and flexibility of our CFCNet compared to the state-of-the-art methods on both indoor and outdoor scenes with different real-life sparse patterns. Code is available at: https://github.com/choyingw/CFCNet.
1 Introduction
Depth measurements are widely used in computer vision applications [1, 2, 3]. However, most of the existing techniques for depth capture produce depth maps with incomplete data. For example, structured-light cameras cannot capture depth measurements where surfaces are too shiny; Visual Simultaneous Localization and Mapping (VSLAM) systems are not able to recover the depth of non-textured objects; LiDARs produce semi-dense depth maps due to their limited scanlines and scanning frequency. Recently, researchers have introduced the sparse depth completion task, which aims to fill in missing depth measurements using deep learning based methods [4, 5, 6, 7, 8, 9, 10]. Those studies produce dense depth maps by fusing features of sparse depth measurements and corresponding RGB images. However, they usually treat feature extraction of these two types of information as independent processes, which in reality turns the task they work on into "multi-modality depth prediction" rather than "depth completion." While multi-modality depth prediction may produce dense outputs, it fails to fully utilize the observable data. The depth completion task is unique in that part of its output is already observable in the input. Revealing the relationship between data pairs (i.e., between observable depth measurements and the corresponding image pixels) may help complete depth maps by emphasizing the information from the image domain at the locations where the depth values are non-observable. To accomplish the depth completion task from a novel perspective, we propose an end-to-end deep learning based framework, the Correlation For Completion Network (CFCNet). We view a completed dense depth map as composed of two parts: one is the sparse depth, which is observable and used as the input; the other is non-observable and must be recovered by the task. Likewise, the corresponding full RGB image of the depth map can be decomposed into two parts: one is the sparse RGB, which holds the RGB values at the observable locations of the sparse depth; the other is the complementary RGB, which is the subtraction of the sparse RGB from the full RGB image. See Figure 2 for examples.
During the training phase, CFCNet learns the relationship between sparse depth and sparse RGB and uses the learned knowledge to recover the non-observable depth from the complementary RGB. To learn the relationship between the two modalities, we propose a 2D deep canonical correlation analysis (2D2CCA). Our 2D2CCA learns non-linear projections under which the projected features from the RGB and depth domains are maximally correlated. Using 2D2CCA as an objective function, we capture semantically correlated features from the RGB and depth domains. In this fashion, we utilize the relationship between the observable depth and the corresponding RGB input, including at the locations where depth is non-observable. We then use the joint information learned from the input data pairs to output a dense depth map. The pipeline of our CFCNet is shown in Figure 2, and details of our method are described in Section 3. The main contributions of CFCNet can be summarized as follows.
- Constructing a framework for the sparse depth completion task that leverages the relationship between sparse depth and its corresponding RGB image, using the complementary RGB information to complement the missing sparse depth information.
- Proposing 2D2CCA, which forces the feature encoders to extract the most similar semantics from multiple modalities. Our CFCNet is the first to apply the two-dimensional approach to CCA in deep learning studies. It overcomes the small sample size problem of other CCA based deep learning frameworks on modern computer vision tasks.
- Achieving state-of-the-art depth completion on several datasets with a variety of sparse patterns that reflect real-world settings.
2 Related Work
Sparse Depth Completion is a task that targets dense depth completion from sparse depth measurements and a corresponding RGB image. The nature of sparse depth measurements varies across scenarios and sensors. Sparse depth generated by stereo methods contains more information on object contours and less information in non-textured areas [11]. LiDAR sensors produce structured sparsity due to their scanning behavior [12]. Feature-based SLAM systems (such as ORB-SLAM [13]) only capture depth information at the positions of the corresponding feature points. Besides these three most popular patterns, other patterns have also been studied. For instance, [14] uses a line pattern to simulate partial observations from laser systems; [15] culls the depth data of shiny surface areas out of the dense depth map to mimic the output of commodity depth cameras; and [8] uses uniform grid patterns. The latter is a simplified and artificial pattern; real-life situations require a more practical tool. As for input sparsity, [4] stacks sparse depth maps and corresponding RGB images together to build a four-channel (RGB-D) input before it is fed into a ResNet-based depth estimation network. This treatment produces better results than monocular depth estimation with only RGB images. Other studies use a two-branch encoder-decoder framework similar to those used in RGB-D segmentation tasks [9, 10, 16, 17]. Their approaches do not apply special treatments to the sparse depth branch; they work well on datasets where sparsity is not extremely severe, e.g., the KITTI depth completion benchmark [6]. In most two-branch frameworks, features from different sources are extracted independently and fused through direct concatenation or addition, or features from the RGB branch are used to provide extra guidance to refine the depth prediction results.
Canonical Correlation Analysis is a standard statistical technique for learning the shared subspace across several original data spaces. For two modalities, in the shared subspace each representation is the most predictive of the other representation and the most predictable by it [18, 19]. To overcome the constraint of traditional CCA that the projections must be linear, deep canonical correlation analysis (DCCA) [20, 21] has been proposed. DCCA uses deep neural networks to learn more complex non-linear projections between multiple modalities. CCA, DCCA, and other variants have been widely used for multi-modal representation learning problems [22, 23, 24, 25, 26, 27, 28]. The one-dimensional CCA method suffers from the singularity problem of covariance matrices in the case of a high-dimensional space with a small sample size (SSS). Existing works have extended CCA to a two-dimensional form to avoid the SSS problem: [29, 30, 31] build full-rank covariance matrices on the face recognition task using an approach inspired by 2DPCA [32] and 2DLDA [33]. However, those studies do not approximate complex non-linear projections as [20, 21] do. Our CFCNet is the first to integrate two-dimensional CCA into a deep learning framework, overcoming the intrinsic problem of applying DCCA to modern computer vision tasks (detailed in Section 3.2).
3 Our Approach
Our goal is to leverage the relationship between the sparse depth and the corresponding pixels in RGB images in order to optimize performance on the depth completion task. We complement the missing depth components using cues from the RGB domain. Since CCA can learn a shared subspace with predictive characteristics, we estimate the missing depth components from RGB-domain features through CCA. However, traditional CCA suffers from the SSS problem in modern computer vision tasks (detailed in Section 3.2), so we propose 2D2CCA to capture similar semantics from both the RGB and depth encoders. After the encoders learn semantically similar features, we use a transformer network to map features from the RGB domain to the depth domain. This design not only enables the reconstruction of missing depth features from the complementary RGB information, but also ensures the semantic similarity and matching numerical range of the two data sources. Based on this structure, the decoder in CFCNet uses the reconstructed depth features along with the observable depth features to recover the dense depth map.
3.1 Network Architecture
The proposed CFCNet structure is shown in Figure 2. CFCNet takes as input a sparse depth map, the sparse RGB, and the complementary RGB. We use our Sparsity-aware Attentional Convolutions (SAConv, shown in Figure 3) in VGG16-like encoders. SAConv is inspired by the local attention mask [34]: Harley et al. [34] introduce a segmentation-aware mask to let convolution operators "focus" on the signals consistent with the segmentation mask. In order to propagate information from reliable sources, we use sparsity masks to make the convolution operations attend to signals from reliable locations. The difference between our SAConv and the local attention mask is that SAConv does not apply mask normalization. We avoid mask normalization because it affects the stability of the later 2D2CCA computation, due to the numerically small features it produces after repeated normalization. Also, similar to [6], we apply a max-pooling operation to the masks after every SAConv to keep track of visibility: if at least one nonzero value is visible to a convolutional kernel, the max-pooling sets the mask value at that position to 1.
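A minimal PyTorch sketch of the masked convolution plus mask max-pooling just described (a sketch of the idea, not the authors' released implementation; kernel size and channel counts are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAConvSketch(nn.Module):
    """Convolve only over valid pixels; propagate the validity mask by max-pooling."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=True)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)

    def forward(self, x, mask):
        # Zero out unreliable locations, then convolve (no mask normalization).
        y = self.conv(x * mask)
        # A position stays visible if any input inside the kernel window was visible.
        new_mask = self.pool(mask)
        return F.relu(y), new_mask

# Usage: sparse depth (1 channel) with a binary validity mask.
x = torch.randn(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()
y, m = SAConvSketch(1, 32)(x, mask)
```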
Most multi-modal deep learning approaches simply concatenate or element-wise add bottleneck features. However, when the extracted semantics and the numerical ranges of the feature values differ across sources, direct concatenation or addition of multi-modal data does not always yield better performance than a single-modal data source, as seen in [35, 17]. To avoid this problem, we use encoders to extract higher-level semantics from the two branches and propose 2D2CCA (detailed in Section 3.2) to ensure that the extracted features from the two branches are maximally correlated. The intuition is that we want to capture the same semantics from the RGB and depth domains. Next, we use a transformer network to map the extracted features from the RGB domain to the depth domain, making extracted features from different sources share the same numerical range. During the training phase, we use features of the sparse depth and the corresponding sparse RGB image to compute the 2D2CCA loss and the transformer loss. We use a symmetric decoder structure to decode the embedded features: as input, we concatenate the sparse depth features with the reconstructed missing depth features, where the latter are extracted from the complementary RGB image through the RGB encoder and the transformer. To enable single-stage training, we adopt the weight-sharing strategies shown in Figure 2.
3.2 2D Deep Canonical Correlation Analysis (2D2CCA)
The existing CCA based techniques introduced in Section 2 have limitations in modern computer vision tasks. Since modern computer vision studies usually use very deep networks to extract information from images of relatively large resolution, the batch size is limited by GPU memory; meanwhile, the latent feature representations in such networks are high-dimensional. With a limited batch size, using DCCA with one-dimensional vector representations would therefore lead to the SSS problem. We propose a novel 2D deep canonical correlation analysis (2D2CCA) to overcome these limitations. We denote the completed depth map as $D$ and its corresponding RGB image as $I$. The sparse depth map in the input and the corresponding sparse RGB image are denoted $sD$ and $sI$. The RGB/depth encoders are denoted $f_I$ and $f_D$, with parameters $\theta_I$ and $\theta_D$, respectively. As described in Section 3.1, $f_I$ and $f_D$ use SAConv to propagate information from reliable points when extracting features from sparse inputs. We generate a 3D feature grid pair $(F_{sD} \in \mathbb{R}^{m \times n \times C}, F_{sI} \in \mathbb{R}^{m \times n \times C})$ for each sparse depth map/image pair $(sD, sI)$ by defining $F_{sD} = f_D(sD; \theta_D)$ and $F_{sI} = f_I(sI; \theta_I)$. Inside each feature grid pair there are $C$ feature map pairs $(F^i_{sD} \in \mathbb{R}^{m \times n}, F^i_{sI} \in \mathbb{R}^{m \times n})$, $\forall i < C$, with $C = 512$ in our network. Rather than analyzing the global correlation between all possible pairs $(F^i_{sD}, F^j_{sI})$, $\forall i \neq j$, we analyze the channel-wise canonical correlation between feature maps with the same channel index, $(F^i_{sD}, F^i_{sI})$. This channel-wise correlation analysis yields features with similar semantic meanings for each modality, as shown in Figure 6, which guides $f_I$ to embed more valuable information related to depth completion.
Since a one-dimensional feature representation would lead to the SSS problem, we introduce a two-dimensional approach similar to [32] to generate a full-rank covariance matrix $\hat{\Sigma}_{sD,sI} \in \mathbb{R}^{m \times n}$, calculated as
$$\hat{\Sigma}_{sD,sI} = \frac{1}{C}\sum_{i=0}^{C-1}\left[F^i_{sD} - E[F_{sD}]\right]\left[F^i_{sI} - E[F_{sI}]\right]^T, \quad (1)$$
where we define $E[F] = \frac{1}{C}\sum_{i=0}^{C-1} F^i$. Besides, we generate the covariance matrix $\hat{\Sigma}_{sD}$ (and analogously $\hat{\Sigma}_{sI}$) with a regularization constant $r_1$ and identity matrix $I$ as
$$\hat{\Sigma}_{sD} = \frac{1}{C}\sum_{i=0}^{C-1}\left[F^i_{sD} - E[F_{sD}]\right]\left[F^i_{sD} - E[F_{sD}]\right]^T + r_1 I. \quad (2)$$
The correlation between $F_{sD}$ and $F_{sI}$ is calculated as
$$\mathrm{corr}(F_{sD}, F_{sI}) = \left\| \hat{\Sigma}_{sD}^{-\frac{1}{2}} \, \hat{\Sigma}_{sD,sI} \, \hat{\Sigma}_{sI}^{-\frac{1}{2}} \right\|_{\mathrm{tr}}. \quad (3)$$
A higher value of $\mathrm{corr}(F_{sD}, F_{sI})$ represents a higher correlation between the two feature blocks. Since $\mathrm{corr}(F_{sD}, F_{sI})$ is a non-negative scalar, we use $-\mathrm{corr}(F_{sD}, F_{sI})$ as the optimization objective to guide the training of the two feature encoders. To compute the gradient of $\mathrm{corr}(F_{sD}, F_{sI})$ with respect to $\theta_D$ and $\theta_I$, we compute its gradient with respect to $F_{sD}$ and $F_{sI}$ and then backpropagate, as detailed next. Define $M = \hat{\Sigma}_{sD}^{-\frac{1}{2}} \, \hat{\Sigma}_{sD,sI} \, \hat{\Sigma}_{sI}^{-\frac{1}{2}}$ and decompose it as $M = USV^T$ using the SVD. Then
$$\frac{\partial\, \mathrm{corr}(F_{sD}, F_{sI})}{\partial F_{sI}} = \frac{1}{C}\left(2\,\nabla_{sDsD}\,F_{sD} + \nabla_{sDsI}\,F_{sI}\right), \quad (4)$$
where $\nabla_{sDsI} = \hat{\Sigma}_{sD}^{-\frac{1}{2}} U V^T \hat{\Sigma}_{sI}^{-\frac{1}{2}}$ and $\nabla_{sDsD} = -\frac{1}{2}\hat{\Sigma}_{sD}^{-\frac{1}{2}} U S U^T \hat{\Sigma}_{sD}^{-\frac{1}{2}}$. The gradient $\frac{\partial\, \mathrm{corr}(F_{sD}, F_{sI})}{\partial F_{sD}}$ follows calculations analogous to Equation (4).
3.3 Loss Function
We denote our channel-wise 2D2CCA loss as $L_{2D^2CCA} = -\mathrm{corr}(F_{sD}, F_{sI})$. We denote the component transformed from the sparse RGB to the depth domain as $\hat{F}_{sD}$. The transformer loss describes the numerical similarity between the RGB and depth domains, measured with the L2 norm: $L_{trans} = \| F_{sD} - \hat{F}_{sD} \|_2^2$. We also build another encoder and another transformer network that share weights with the encoder and transformer network for the sparse RGB; the input of this encoder is the complementary RGB image. We use features extracted from the complementary RGB image to predict the features of the non-observable depth through the transformer network. For the complementary RGB image, we denote the extracted features and the transformed component as $F_{cI}$ and $\hat{F}_{cD}$. We then concatenate $F_{sD}$ and $\hat{F}_{cD}$, both of which have 512 channels, obtaining a 1024-channel bottleneck feature in the depth domain, which we pass to the decoder described in Section 3.1. The output of the decoder is a completed dense depth map $\hat{D}$. To measure the inconsistency between the ground truth $D_{gt}$ and the completed depth map, we use the pixel-wise L2 norm; thus our reconstruction loss is $L_{recon} = \| D_{gt} - \hat{D} \|_2^2$. Also, since bottleneck features have limited expressiveness, when the input sparsity is severe (e.g., only 0.1% of the full resolution is sampled), the completed depth maps usually exhibit gridding effects. To resolve these, we introduce the smoothness term of [36] into our loss function: $L_{smooth} = \| \nabla^2 \hat{D} \|_1$, where $\nabla^2$ denotes the second-order gradients. Our final weighted loss function becomes
$$L_{total} = L_{2D^2CCA} + w_t L_{trans} + w_r L_{recon} + w_s L_{smooth}. \quad (5)$$
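A minimal PyTorch sketch of the 2D2CCA correlation in Eqs. (1)-(3) (a sketch under the stated definitions; it relies on autograd rather than the closed-form gradient of Eq. (4), and the inverse matrix square root via eigendecomposition is one possible choice):

```python
import torch

def corr_2d2cca(F_d, F_i, r1=1e-3):
    """F_d, F_i: (C, m, n) feature grids. Returns corr(F_sD, F_sI) of Eq. (3)."""
    C, m, n = F_d.shape
    Fd = F_d - F_d.mean(dim=0, keepdim=True)            # F^i - E[F_sD]
    Fi = F_i - F_i.mean(dim=0, keepdim=True)            # F^i - E[F_sI]
    # Eqs. (1)/(2): m x m covariance matrices averaged over the C channels.
    S_di = torch.einsum('cij,ckj->ik', Fd, Fi) / C
    S_dd = torch.einsum('cij,ckj->ik', Fd, Fd) / C + r1 * torch.eye(m)
    S_ii = torch.einsum('cij,ckj->ik', Fi, Fi) / C + r1 * torch.eye(m)

    def inv_sqrt(S):                                    # S^(-1/2) via eigendecomposition
        w, V = torch.linalg.eigh(S)
        return V @ torch.diag(w.clamp_min(1e-8).rsqrt()) @ V.T

    M = inv_sqrt(S_dd) @ S_di @ inv_sqrt(S_ii)
    return torch.linalg.svdvals(M).sum()                # trace norm of M

# The training objective of Eq. (5) would then include -corr as L_2D2CCA:
loss = -corr_2d2cca(torch.randn(512, 8, 8), torch.randn(512, 8, 8))
```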
4 Experiments
4.1 Dataset and Experiment Details
Implementation details. We use PyTorch to implement the network. Our encoders are similar to VGG16, without the fully-connected layers. We apply ReLU to the extracted features after every SAConv operation. Downsampling is applied to both the features and the masks in the encoders. The transformer network is a 2-layer network with 3x3 kernels, stride 1, and 512 channels, using our SAConv. The decoder is also a VGG16-like network, using deconvolution to upsample. We use the SGD optimizer. All hyperparameter tuning is summarized in the supplementary material.
Datasets. We conduct extensive experiments on outdoor-scene datasets, namely the KITTI odometry dataset [12] and the Cityscapes depth dataset [37], and on indoor-scene datasets, namely NYUv2 [38] and the SLAM RGB-D datasets ICL-NUIM [39] and TUM [40].
- KITTI dataset. The KITTI dataset contains both RGB and LiDAR measurements, totaling 22 sequences for autonomous driving use. We use the official split, with 46K images for training and 46K for testing. We adopt the same settings as described in [4, 41], which drop the upper part of the images and resize them to 912x228.
- Cityscapes dataset. The Cityscapes dataset contains RGB images and depth maps computed by stereo matching of outdoor scenes. We use the official training/validation split: the training set contains 23K images from 41 sequences, and the test set contains 3 sequences. We center-crop the images to 900x335 to avoid the sky at the top and the car logo at the bottom.
- NYUv2 dataset. The NYUv2 dataset contains 464 sequences of indoor RGB and depth data captured with a Kinect. We use the official split and follow [4] in sampling 50K images as training data. The test data contains 654 images.
- SLAM RGB-D dataset. We use sequences of the ICL-NUIM [42] and TUM [40] RGB-D SLAM datasets. The former is synthetic; the latter was acquired with a Kinect. We use the same test sequences as described in [1].
Sparsifiers. A sparsifier describes the strategy for sampling the dense/semi-dense depth maps in a dataset so that they become the sparse depth input used for training and evaluation. We define three sparsifiers to simulate different sparse patterns found in real-world applications. The uniform sparsifier samples the dense depth map uniformly, simulating the nearly uniform scanning pattern of a LiDAR. The stereo sparsifier only samples depth measurements on edges or textured objects in the scene, to simulate the sparse patterns generated by stereo matching or direct VSLAM. The ORB sparsifier only keeps depth measurements at the locations of ORB features in the corresponding RGB images, simulating the sparse depth output of feature-based VSLAM. We set a sample count for the uniform and stereo sparsifiers to control sparsity. Since the number of ORB features varies across images, we do not predefine a sample count for the ORB sparsifier but take the depth at all ORB feature positions.
Error metrics. We use the same error metrics as most previous works: (1) RMSE, the root mean square error; (2) MAE, the mean absolute error; and (3) $\delta_i$, the percentage of predicted pixels whose relative error is within $1.25^i$, where most related works adopt $i = 1, 2, 3$. RMSE and MAE both measure error in meters in our experiments.
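A minimal sketch of the uniform sparsifier described above (hypothetical helper; the stereo and ORB sparsifiers would replace the random choice with edge or feature locations):

```python
import numpy as np

def uniform_sparsifier(dense_depth, n_samples=500, rng=None):
    """Keep n_samples valid depth pixels chosen uniformly at random; zero the rest."""
    rng = rng or np.random.default_rng()
    valid = np.flatnonzero(dense_depth > 0)            # indices with a depth reading
    keep = rng.choice(valid, size=min(n_samples, valid.size), replace=False)
    sparse = np.zeros_like(dense_depth)
    sparse.flat[keep] = dense_depth.flat[keep]
    mask = (sparse > 0).astype(dense_depth.dtype)      # validity mask for SAConv
    return sparse, mask
```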
Ablation studies. To examine the effectiveness of the multi-modal approach, we evaluate the network using four types of inputs: (1) dense RGB images; (2) sparse depth; (3) dense RGB image + sparse depth; and (4) complementary RGB image + sparse depth. The evaluation results are shown in Table 1. We observe that the networks with single-modal input perform worse than those with multi-modal input, which validates our multi-modal design. Besides, we observe that using dense RGB with sparse depth performs similarly to, but worse than, using complementary RGB with sparse depth. The sparse depth inputs are precise; however, if we extract RGB-domain features at locations where we already have precise depth information, it causes ambiguity, and thus the performance is worse than when using only the complementary RGB information. We also conduct ablation studies for different loss combinations on the KITTI and NYUv2 datasets in our supplementary material. Furthermore, we conduct an ablation study with different sparsity levels on the NYUv2 dataset: the stereo sparsifier is used to sample from the dense depth maps to generate sparse depth data for training and testing, and we show how different sparsity levels affect the predicted depth map quality. The results are in Table 2.
4.2 Outdoor scenes - KITTI odometry and Cityscapes
For the two outdoor datasets, KITTI and Cityscapes, we use the uniform sparsifier. For the KITTI dataset, we sample 500 points as sparse depth, the same as in previous works, and compare with state-of-the-art methods [4, 43, 41, 44]. We follow the evaluation settings in these works and randomly choose 3000 images to calculate the numerical results, shown in Table 3. Next, we conduct experiments using both the KITTI and Cityscapes datasets. Some monocular depth prediction works use the Cityscapes dataset for training and the KITTI dataset for testing; we adopt this setting and use 100 uniformly sampled sparse depth points as input. The results are shown in Table 4.
4.3 Indoor scenes - NYUv2 and SLAM RGB-D datasets
For the NYUv2 indoor dataset, we use the stereo sparsifier to sample points and compare to the state of the art [4] at different sparsity levels using their publicly released code. The results are shown in Table 5. Next, we conduct experiments on the SLAM RGB-D dataset. We follow the setting of the state-of-the-art CNN-SLAM [1] and perform a cross-dataset evaluation: we train the model on NYUv2 using the ORB sparsifier and evaluate it on the SLAM RGB-D dataset. We use the metric from CNN-SLAM, i.e., the percentage of accurate estimations, where an estimation is accurate if its error is within ±10% of the ground truth. The results are in Table 6.
5 Conclusion
In this paper, we directly analyze the relationship between sparse depth information and the corresponding pixels in RGB images. To better fuse the information, we propose 2D2CCA, which ensures that the most similar semantics are captured by the two branches, and we use the complementary RGB information to complement the missing depth. Extensive experiments on four outdoor/indoor scene datasets in total show that our results achieve the state of the art.
1. What is the focus of the paper regarding depth completion?
2. What is the novelty of the proposed method compared to prior works?
3. How does the reviewer assess the quality and clarity of the paper's content?
4. Are there any suggestions or improvements that the reviewer has regarding the experimental results and configurations?
5. Does the reviewer have any questions regarding the paper, such as backpropagation or dataset usage?
Review
Review
Update: After reading all the reviews and the feedback, my rating stays the same.
Note on visualizations: Please be aware that false colors (especially spectrum LUTs) are problematic for people with color blindness.
---
Summary: The paper is about depth completion and uses 2D canonical correlation analysis to learn the relationship between color images and depth maps for depth completion. Inputs to this method are a sparse depth map and a color image. Similar to previous methods like "Sparse and Dense Data with CNNs", Jaritz et al., 3DV 2018, the proposed method uses separate encoders for processing depth and color. However, the proposed method separates the color image into a sparse part (masked out where no depth is available) and a complementary part (masked out where depth is available) and uses a third branch during training. The three encoder branches process the sparse depth map, the complementary color image, and the sparse color image. The last branch, which processes the sparse color image, is required for applying the new CCA-based loss during training. The CCA-based loss forces the network to extract features from color images that are highly linearly correlated with features extracted from depth images, and vice versa. Besides the CCA loss, the method uses a reconstruction loss on the final depth output and losses to enforce smoothness and numerical range. The architecture uses a binary mask to focus the network on processing the pixels with information, which is inspired by "Sparsity Invariant CNNs", Uhrig et al., 3DV 2017 and "Segmentation-aware Convolutional Networks using Local Attention Masks", Harley et al., ICCV 2017. Finally, the features from the complementary color branch and the depth branch are concatenated and decoded to a dense depth map.
Originality: The idea to extend and use deep canonical correlation analysis in a depth completion framework is clearly novel and sets this work apart from previous methods. The network architecture itself is a reasonable combination of existing methods; the most notable changes are a consequence of the CCA loss (the main contribution), which is good. The related work section on CCA provides many references and is especially helpful.
Quality: Overall, the paper does a good job of explaining the method and supporting the claims with experiments. The experiments are diverse and cover multiple datasets. However, I think some results could be less significant than how they are presented. For instance, the difference between using the complementary RGB image and the dense RGB image is quite small; adding information about the variance would help the reader assess the importance of the different input configurations. Further, some additional information and the evaluation of additional configurations could improve the understanding of the experiments. See Improvements for details.
Clarity: There are some small issues with the writing that need to be fixed, but overall the paper reads well. The network for adapting the numerical range is just called the "transformer network", which is ambiguous; giving it a more specific name (maybe "range transform network") would avoid that. There are a few grammar issues.
Significance: In my opinion, it is likely that future approaches to this task will make use of the CCA-based loss. The good results support the idea presented in this work and prove its significance. I vote for accepting this work.
Questions for the authors: Do you backpropagate to the parameters of the sparse depth branch from the 2D CCA loss?
Which dataset was used for the results in Table 1? Were the networks trained for the specific input configuration in Table 1?
NIPS
Title Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion Abstract In this paper, we propose our Correlation For Completion Network (CFCNet), an end-to-end deep learning model that uses the correlation between two data sources to perform sparse depth completion. CFCNet learns to capture, to the largest extent, the semantically correlated features between RGB and depth information. Through pairs of image pixels and the visible measurements in a sparse depth map, CFCNet facilitates feature-level mutual transformation of different data sources. Such a transformation enables CFCNet to predict features and reconstruct data of missing depth measurements according to their corresponding, transformed RGB features. We extend canonical correlation analysis to a 2D domain and formulate it as one of our training objectives (i.e. 2D deep canonical correlation, or "2D2CCA loss"). Extensive experiments validate the ability and flexibility of our CFCNet compared to the state-of-the-art methods on both indoor and outdoor scenes with different real-life sparse patterns. Codes are available at: https://github.com/choyingw/CFCNet. 1 Introduction Depth measurements are widely used in computer vision applications [1, 2, 3]. However, most of the existing techniques for depth capture produce depth maps with incomplete data. For example, structured-light cameras cannot capture depth measurements where surfaces are too shiny; Visual Simultaneous Localization and Mapping (VSLAM) systems are not able to recover depth of non-textured objects; LiDARs produce semi-dense depth maps due to their limited scanlines and scanning frequency. Recently, researchers have introduced the sparse depth completion task, aiming to fill missing depth measurements using deep learning based methods [4, 5, 6, 7, 8, 9, 10]. These studies produce dense depth maps by fusing features of sparse depth measurements and corresponding RGB images. However, they usually treat feature extraction of these two types of information as independent processes, which in effect turns the task into "multi-modality depth prediction" rather than "depth completion." While multi-modality depth prediction may produce dense outputs, it fails to fully utilize the observable data. The depth completion task is unique in that part of its output is already observable in the input. Revealing the relationship between data pairs (i.e. between observable depth measurements and the corresponding image pixels) may help complete depth maps by emphasizing information from the image domain at the locations where the depth values are non-observable. ∗Both authors contributed equally to this work. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. To accomplish the depth completion task from a novel perspective, we propose an end-to-end deep learning based framework, the Correlation For Completion Network (CFCNet). We view a completed dense depth map as composed of two parts. One is the sparse depth, which is observable and used as the input; the other is non-observable and recovered by the task. Likewise, the corresponding full RGB image of the depth map can be decomposed into two parts: one is called the sparse RGB, which holds the RGB values at the observable locations in the sparse depth; the other part is the complementary RGB, which is the subtraction of the sparse RGB from the full RGB image. See Figure 2 for examples.
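The sparse/complementary decomposition is just a masking operation; a minimal PyTorch sketch, with illustrative names that are not taken from the released code:

```python
import torch

def decompose_rgb(rgb, sparse_depth):
    """Split a full RGB image into sparse RGB and complementary RGB.

    rgb:          (3, H, W) full RGB image
    sparse_depth: (1, H, W) depth map, zero where no measurement exists
    """
    mask = (sparse_depth > 0).float()        # 1 at observable depth locations
    sparse_rgb = rgb * mask                  # RGB only where depth is observed
    complementary_rgb = rgb * (1.0 - mask)   # RGB only where depth is missing
    return sparse_rgb, complementary_rgb, mask
```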
During the training phase, CFCNet learns the relationship between sparse depth and sparse RGB and uses the learned knowledge to recover non-observable depth from complementary RGB. To learn the relationship between the two modalities, we propose a 2D deep canonical correlation analysis (2D2CCA). Our 2D2CCA learns non-linear projections under which the projected features from the RGB and depth domains are maximally correlated. Using 2D2CCA as an objective function, we can capture the semantically correlated features from the RGB and depth domains. In this fashion, we utilize the relationship between observable depth and the RGB input at the corresponding non-observable locations. We then use the joint information learned from the input data pairs to output a dense depth map. The pipeline of our CFCNet is shown in Figure 2. Details of our method are described in Section 3. The main contributions of CFCNet can be summarized as follows. • Constructing a framework for the sparse depth completion task which leverages the relationship between sparse depth and its corresponding RGB image, using the complementary RGB information to complete the missing sparse depth information. • Proposing 2D2CCA, which forces the feature encoders to extract the most similar semantics from multiple modalities. Our CFCNet is the first to apply the two-dimensional approach to CCA in deep learning studies. It overcomes the small sample size problem of other CCA-based deep learning frameworks on modern computer vision tasks. • Achieving state-of-the-art depth completion results on several datasets with a variety of sparse patterns that reflect real-world settings. 2 Related Work Sparse Depth Completion is a task that targets dense depth completion from sparse depth measurements and a corresponding RGB image. The nature of sparse depth measurements varies across scenarios and sensors. Sparse depth generated by stereo methods contains more information on object contours and less information in non-textured areas [11]. LiDAR sensors produce structured sparsity due to their scanning behavior [12]. Feature-based SLAM systems (such as ORB-SLAM [13]) only capture depth information at the positions of corresponding feature points. Besides these three most popular patterns, some other patterns have also been studied. For instance, [14] uses a line pattern to simulate partial observations from laser systems; [15] culls the depth data of shiny surface areas out of the dense depth map to mimic commodity depth cameras' output. [8] uses uniform grid patterns. The latter is a simplified, artificial pattern; real-life situations require a more practical tool. As for input sparsity, [4] stacks sparse depth maps and corresponding RGB images together to build a four-channel (RGB-D) input before feeding it into a ResNet-based depth estimation network. This treatment produces better results than monocular depth estimation with only RGB images. Other studies involve a two-branch encoder-decoder framework similar to those used in RGB-D segmentation tasks [9, 10, 16, 17]. Their approaches do not apply special treatments to the sparse depth branch. They work well on datasets where sparsity is not extremely severe, e.g. the KITTI depth completion benchmark [6]. In most of the two-branch frameworks, features from different sources are extracted independently and fused through direct concatenation or addition, or features from the RGB branch are used as extra guidance to refine the depth prediction results.
Canonical Correlation Analysis is a standard statistical technique for learning the shared subspace across several original data spaces. For two modalities, in the shared subspace each representation is the most predictive of the other representation and the most predictable by the other [18, 19]. To overcome the constraint of traditional CCA that the projections must be linear, deep canonical correlation analysis (DCCA) [20, 21] has been proposed. DCCA uses deep neural networks to learn more complex non-linear projections between multiple modalities. CCA, DCCA, and other variants have been widely used on multi-modal representation learning problems [22, 23, 24, 25, 26, 27, 28]. The one-dimensional CCA method suffers from the singularity problem of covariance matrices in the case of a high-dimensional space with a small sample size (SSS). Existing works have extended CCA in a two-dimensional manner to avoid the SSS problem. [29, 30, 31] use a similar approach to building full-rank covariance matrices, inspired by 2DPCA [32] and 2DLDA [33], on the face recognition task. However, those studies do not approximate complex non-linear projections as [20, 21] attempt. Our CFCNet is the first to integrate two-dimensional CCA into deep learning frameworks to overcome the intrinsic problem of applying DCCA to modern computer vision tasks, detailed in Section 3.2. 3 Our Approach Our goal is to leverage the relationship between the sparse depth measurements and their corresponding pixels in RGB images in order to optimize the performance of the depth completion task. We try to complete the missing depth components using cues from the RGB domain. Since CCA can learn a shared subspace with predictive characteristics, we estimate the missing depth component from RGB-domain features through CCA. However, traditional CCA has the SSS problem in modern computer vision tasks, detailed in Section 3.2. We therefore propose 2D2CCA to capture similar semantics from both the RGB and depth encoders. After the encoders learn semantically similar features, we use a transformer network to transform features from the RGB to the depth domain. This design not only enables the reconstruction of missing depth features from complementary RGB information but also ensures semantic similarity and the same numerical range across the two data sources. Based on this structure, the decoder in CFCNet is capable of using the reconstructed depth features along with the observable depth features to recover the dense depth map. 3.1 Network Architecture The proposed CFCNet structure is shown in Figure 2. CFCNet takes in a sparse depth map, a sparse RGB image, and a complementary RGB image. We use our Sparsity-aware Attentional Convolutions (SAConv, shown in Figure 3) in VGG16-like encoders. SAConv is inspired by the local attention mask [34]. Harley et al. [34] introduce a segmentation-aware mask to let convolution operators "focus" on the signals consistent with the segmentation mask. In order to propagate information from reliable sources, we use sparsity masks to make convolution operations attend to the signals from reliable locations. The difference between our SAConv and the local attention mask is that SAConv does not apply mask normalization. We avoid mask normalization because it affects the stability of our later 2D2CCA calculations: repeated normalization produces numerically small extracted features. Also, similar to [6], we apply a max-pooling operation to the masks after every SAConv to keep track of visibility: if at least one nonzero value is visible to a convolutional kernel, the max-pooling sets the mask value at that position to 1.
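A minimal sketch of such a layer is shown below; the elementwise pre-masking is a simplification of the windowed attention in [34], and the kernel sizes are illustrative rather than taken from the paper's architecture:

```python
import torch
import torch.nn as nn

class SAConv(nn.Module):
    """Sparsity-aware attentional convolution (simplified sketch).

    Features are multiplied by the visibility mask before convolving, so the
    kernel attends only to reliable locations; no mask normalization is
    applied. A max-pool on the mask marks an output position as visible as
    soon as one nonzero value falls inside the kernel window.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, pad)
        self.pool = nn.MaxPool2d(kernel_size, stride, pad)

    def forward(self, x, mask):
        x = torch.relu(self.conv(x * mask))  # attend to reliable signals only
        mask = self.pool(mask)               # propagate visibility
        return x, mask
```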
Most multi-modal deep learning approaches simply concatenate or element-wise add bottleneck features. However, when the extracted semantics and the numerical ranges of the feature values differ, direct concatenation and addition of multi-modal data sources do not always yield better performance than a single-modal data source, as seen in [35, 17]. To avoid this problem, we use encoders to extract higher-level semantics from the two branches, and we propose 2D2CCA, detailed in Section 3.2, to ensure the extracted features from the two branches are maximally correlated. The intuition is that we want to capture the same semantics from the RGB and depth domains. Next, we use a transformer network to transform the extracted features from the RGB domain to the depth domain, making the extracted features from different sources share the same numerical range. During the training phase, we use features of the sparse depth and the corresponding sparse RGB image to calculate the 2D2CCA loss and the transformer loss. We use a symmetric decoder structure to decode the embedded features. For its input, we concatenate the sparse depth features with the reconstructed missing depth features, which are extracted from the complementary RGB image through the RGB encoder and the transformer. To ensure single-stage training, we adopt the weight-sharing strategies shown in Figure 2. 3.2 2D Deep Canonical Correlation Analysis (2D2CCA) Existing CCA-based techniques introduced in Section 2 have limitations in modern computer vision tasks. Modern computer vision studies usually use very deep networks to extract information from images of relatively large resolution, so the batch size is limited by GPU memory. Meanwhile, the latent feature representations in the networks are high-dimensional; since the batch size is limited, using DCCA with one-dimensional vector representations would lead to the SSS problem. Therefore, we propose a novel 2D deep canonical correlation analysis (2D2CCA) to overcome these limitations. We denote the completed depth map as $D$ with its corresponding RGB image as $I$. The sparse depth map in the input and the corresponding sparse RGB image are denoted as $sD$ and $sI$. The RGB/depth encoders are denoted as $f_I$ and $f_D$, with parameters $\theta_I$ and $\theta_D$ respectively. As described in Section 3.1, $f_I$ and $f_D$ use SAConv to propagate information from reliable points and extract features from the sparse inputs. We generate a 3D feature grid embedding pair $(F_{sD} \in \mathbb{R}^{m \times n \times C}, F_{sI} \in \mathbb{R}^{m \times n \times C})$ for each sparse depth map/image pair $(sD, sI)$ by defining $F_{sD} = f_D(sD; \theta_D)$ and $F_{sI} = f_I(sI; \theta_I)$. Inside each feature grid pair, there are $C$ feature map pairs $(F_{sD}^i \in \mathbb{R}^{m \times n}, F_{sI}^i \in \mathbb{R}^{m \times n})$, $\forall i < C$, with $C = 512$ in our network. Rather than analyzing the global correlation between all possible pairs $(F_{sD}^i, F_{sI}^j)$, $\forall i \neq j$, we analyze the channelwise canonical correlation between maps with the same channel number, $(F_{sD}^i, F_{sI}^i)$. This channelwise correlation analysis results in features with similar semantic meanings for each modality, as shown in Figure 6, which guides $f_I$ to embed more valuable information related to depth completion. Using a one-dimensional feature representation would lead to the SSS problem in modern deep learning based computer vision tasks.
We introduce a two-dimensional approach similar to [32] to generate the full-rank cross-covariance matrix $\hat{\Sigma}_{sD,sI} \in \mathbb{R}^{m \times m}$, calculated as
$$\hat{\Sigma}_{sD,sI} = \frac{1}{C} \sum_{i=0}^{C-1} \left[ F_{sD}^i - E[F_{sD}] \right] \left[ F_{sI}^i - E[F_{sI}] \right]^T, \quad (1)$$
in which we define $E[F] = \frac{1}{C} \sum_{i=0}^{C-1} F^i$. Besides, we generate the covariance matrix $\hat{\Sigma}_{sD}$ (and, respectively, $\hat{\Sigma}_{sI}$) with a regularization constant $r_1$ and the identity matrix $I$ as
$$\hat{\Sigma}_{sD} = \frac{1}{C} \sum_{i=0}^{C-1} \left[ F_{sD}^i - E[F_{sD}] \right] \left[ F_{sD}^i - E[F_{sD}] \right]^T + r_1 I. \quad (2)$$
The correlation between $F_{sD}$ and $F_{sI}$ is calculated as
$$\mathrm{corr}(F_{sD}, F_{sI}) = \left\| \hat{\Sigma}_{sD}^{-\frac{1}{2}} \, \hat{\Sigma}_{sD,sI} \, \hat{\Sigma}_{sI}^{-\frac{1}{2}} \right\|_{tr}. \quad (3)$$
A higher value of $\mathrm{corr}(F_{sD}, F_{sI})$ represents a higher correlation between the two feature blocks. Since $\mathrm{corr}(F_{sD}, F_{sI})$ is a non-negative scalar, we use $-\mathrm{corr}(F_{sD}, F_{sI})$ as the optimization objective to guide the training of the two feature encoders. To compute the gradient of $\mathrm{corr}(F_{sD}, F_{sI})$ with respect to $\theta_D$ and $\theta_I$, we compute its gradient with respect to $F_{sD}$ and $F_{sI}$ and then backpropagate, as detailed below. Define $M = \hat{\Sigma}_{sD}^{-\frac{1}{2}} \hat{\Sigma}_{sD,sI} \hat{\Sigma}_{sI}^{-\frac{1}{2}}$ and decompose it as $M = USV^T$ using the SVD. Then
$$\frac{\partial \, \mathrm{corr}(F_{sD}, F_{sI})}{\partial F_{sD}} = \frac{1}{C} \left( 2 \nabla_{sDsD} F_{sD} + \nabla_{sDsI} F_{sI} \right), \quad (4)$$
where $\nabla_{sDsI} = \hat{\Sigma}_{sD}^{-\frac{1}{2}} U V^T \hat{\Sigma}_{sI}^{-\frac{1}{2}}$ and $\nabla_{sDsD} = -\frac{1}{2} \hat{\Sigma}_{sD}^{-\frac{1}{2}} U S U^T \hat{\Sigma}_{sD}^{-\frac{1}{2}}$. The gradient $\partial \, \mathrm{corr}(F_{sD}, F_{sI}) / \partial F_{sI}$ follows a calculation similar to Equation (4). 3.3 Loss Function We denote our channelwise 2D2CCA loss as $L_{2D2CCA} = -\mathrm{corr}(F_{sD}, F_{sI})$. We denote the component transformed from sparse RGB to the depth domain as $\hat{F}_{sD}$. The transformer loss describes the numerical similarity between the RGB and depth domains, which we measure with the L2 norm: $L_{trans} = \| F_{sD} - \hat{F}_{sD} \|_2^2$. We also build another encoder and another transformer network which share weights with the encoder and transformer network for the sparse RGB. The input of this encoder is the complementary RGB image, whose extracted features are used to predict the features of the non-observable depth through the transformer network. For the complementary RGB image, we denote the extracted feature and the transformed component as $F_{cI}$ and $\hat{F}_{cD}$. We then concatenate $F_{sD}$ and $\hat{F}_{cD}$, both of which are 512-channel, obtaining a 1024-channel bottleneck feature in the depth domain. We pass this bottleneck feature into the decoder described in Section 3.1, whose output is a completed dense depth map $\hat{D}$. To penalize the inconsistency between the ground truth $D_{gt}$ and the completed depth map, we use the pixelwise L2 norm, giving the reconstruction loss $L_{recon} = \| D_{gt} - \hat{D} \|_2^2$. Also, since the bottleneck features have limited expressiveness, if the sparsity of the inputs is severe, e.g. only 0.1% of the points in the full resolution are sampled, the completed depth maps usually exhibit gridding artifacts. To resolve them, we introduce a smoothness term as in [36] into our loss function: $L_{smooth} = \| \nabla^2 \hat{D} \|_1$, where $\nabla^2$ denotes the second-order gradients. Our final weighted total loss function becomes
$$L_{total} = L_{2D2CCA} + w_t L_{trans} + w_r L_{recon} + w_s L_{smooth}. \quad (5)$$
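To make Eqs. (1)-(5) concrete, here is a minimal PyTorch sketch of the channelwise 2D2CCA objective; the eigendecomposition-based inverse square root and the numerical clamps are our own choices, not details from the paper:

```python
import torch

def corr_2d(F_d, F_i, r1=1e-3):
    """Channelwise 2D canonical correlation, Eqs. (1)-(3).

    F_d, F_i: (C, m, n) feature grids from the depth and RGB encoders.
    The training loss is L_2D2CCA = -corr_2d(F_sD, F_sI).
    """
    C, m, _ = F_d.shape
    Fd_c = F_d - F_d.mean(dim=0, keepdim=True)         # F^i - E[F]
    Fi_c = F_i - F_i.mean(dim=0, keepdim=True)
    eye = torch.eye(m, dtype=F_d.dtype)
    # Eqs. (1)-(2): channel-averaged (cross-)covariances, all m x m
    S_di = torch.einsum('cab,ckb->ak', Fd_c, Fi_c) / C
    S_dd = torch.einsum('cab,ckb->ak', Fd_c, Fd_c) / C + r1 * eye
    S_ii = torch.einsum('cab,ckb->ak', Fi_c, Fi_c) / C + r1 * eye

    def inv_sqrt(S):                                   # S^{-1/2} via eigh
        w, V = torch.linalg.eigh(S)
        return V @ torch.diag(w.clamp_min(1e-8).rsqrt()) @ V.T

    M = inv_sqrt(S_dd) @ S_di @ inv_sqrt(S_ii)         # Eq. (3)
    return torch.linalg.svdvals(M).sum()               # trace (nuclear) norm

# Total objective, Eq. (5); w_t, w_r, w_s are tunable weights:
#   L_total = -corr_2d(F_sD, F_sI) + w_t*L_trans + w_r*L_recon + w_s*L_smooth
```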
4 Experiments 4.1 Dataset and Experiment Details Implementation details. We use PyTorch to implement the network. Our encoders are similar to VGG16, without the fully-connected layers. We apply ReLU to the extracted features after every SAConv operation. Downsampling is applied to both the features and the masks in the encoders. The transformer network is a 2-layer network with our SAConv, kernel size 3×3, stride 1, and 512 dimensions. The decoder is also a VGG16-like network using deconvolution to upsample. We use the SGD optimizer. All hyperparameter tuning details are given in the supplementary material. Datasets. We have done extensive experiments on outdoor scene datasets, namely the KITTI odometry dataset [12] and the Cityscapes depth dataset [37], and on indoor scene datasets, namely NYUv2 [38] and the SLAM RGB-D datasets ICL-NUIM [39] and TUM [40]. • KITTI dataset. The KITTI dataset contains both RGB and LiDAR measurements, 22 sequences in total, for autonomous driving use. We use the official split, where 46K images are for training and 46K for testing. We adopt the same settings described in [4, 41], which drop the upper part of the images and resize them to 912×228. • Cityscapes dataset. The Cityscapes dataset contains RGB images and depth maps computed by stereo matching of outdoor scenes. We use the official training/validation split. The training set contains 23K images from 41 sequences and the testing set contains 3 sequences. We center crop the images to 900×335 to exclude the sky at the top and the car logo at the bottom. • NYUv2 dataset. The NYUv2 dataset contains 464 sequences of indoor RGB and depth data captured with a Kinect. We use the official dataset split and follow [4] to sample 50K images as training data. The testing data contains 654 images. • SLAM RGB-D dataset. We use sequences from the ICL-NUIM [42] and TUM RGB-D SLAM [40] datasets. The former is synthetic, and the latter was acquired with a Kinect. We use the same testing sequences as described in [1]. Sparsifiers. A sparsifier describes the strategy for sampling the dense/semi-dense depth maps in a dataset to turn them into the sparse depth inputs used for training and evaluation. We define three sparsifiers to simulate different sparse patterns found in real-world applications. The uniform sparsifier uniformly samples the dense depth map, simulating the nearly uniform scanning effect of LiDAR. The stereo sparsifier only samples depth measurements on edges or textured objects in the scene, to simulate the sparse patterns generated by stereo matching or direct VSLAM. The ORB sparsifier only keeps the depth measurements at the locations of ORB features in the corresponding RGB images, simulating the sparse depth output of feature-based VSLAM. We set a sample number for the uniform and stereo sparsifiers to control the sparsity. Since the number of ORB features varies across images, we do not predefine a sample number but take all the depth values at the ORB feature positions. Error metrics. We use the same error metrics as most previous works: (1) RMSE: root mean square error; (2) MAE: mean absolute error; (3) δ_i: percentage of predicted pixels whose relative error is within 1.25^i. Most related works adopt i = 1, 2, 3. RMSE and MAE both measure error in meters in our experiments.
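The three metrics can be computed as follows; a sketch assuming invalid ground-truth pixels are marked by zeros, with the δ_i threshold applied to the prediction/ground-truth ratio:

```python
import torch

def depth_metrics(pred, gt):
    """RMSE / MAE in meters and delta_i accuracies on valid pixels."""
    valid = gt > 0
    p, g = pred[valid], gt[valid]
    rmse = torch.sqrt(torch.mean((p - g) ** 2))
    mae = torch.mean(torch.abs(p - g))
    ratio = torch.max(p / g, g / p)                 # symmetric relative error
    deltas = [(ratio < 1.25 ** i).float().mean() for i in (1, 2, 3)]
    return rmse, mae, deltas
```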
Ablation studies. To examine the effectiveness of the multi-modal approach, we evaluate the network performance using four types of inputs: (1) dense RGB images; (2) sparse depth; (3) dense RGB image + sparse depth; (4) complementary RGB image + sparse depth. The evaluation results are shown in Table 1. We observe that the networks with single-modal input perform worse than those with multi-modal input, which validates our multi-modal design. Besides, we observe that using dense RGB with sparse depth performs similarly to, but worse than, using complementary RGB with sparse depth. The sparse depth inputs are precise; if we also extract RGB-domain features for the locations where precise depth is already available, this causes ambiguity, and thus the performance is worse than with complementary RGB information. We also conduct ablation studies for different loss combinations on the KITTI and NYUv2 datasets in the supplementary material. Furthermore, we conduct an ablation study with different sparsity levels on the NYUv2 dataset. The stereo sparsifier is used to sample from dense depth maps to generate sparse depth data for training and testing. We show how different sparsity levels affect the predicted depth map quality; the results are in Table 2. 4.2 Outdoor scenes - KITTI odometry and Cityscapes For the two outdoor datasets, KITTI and Cityscapes, we use the uniform sparsifier. For the KITTI dataset, we sample 500 points as sparse depth, the same as previous works. We compare with several state-of-the-art works [4, 43, 41, 44]. We follow the evaluation settings in these works and randomly choose 3000 images to calculate the numerical results. The results are in Table 3. Next, we conduct experiments using both the KITTI and Cityscapes datasets. Some monocular depth prediction works use the Cityscapes dataset for training and the KITTI dataset for testing. We adopt this setting and use 100 uniformly sampled sparse depth points as input. The results are shown in Table 4. 4.3 Indoor scenes - NYUv2 and SLAM RGB-D datasets For the NYUv2 indoor scene dataset, we use the stereo sparsifier to sample points. We compare to the state of the art [4] at different sparsity levels using their publicly released code. The results are shown in Table 5. Next, we conduct experiments on the SLAM RGB-D dataset. We follow the setting of the state of the art, CNN-SLAM [1], and perform cross-dataset evaluation: we train the model on NYUv2 using the ORB sparsifier and evaluate on the SLAM RGB-D dataset. We use the metric from CNN-SLAM, the percentage of accurate estimations, where an estimation is accurate if its error is within ±10% of the ground truth. The results are in Table 6. 5 Conclusion In this paper, we directly analyze the relationship between sparse depth information and the corresponding pixels in RGB images. To better fuse information, we propose 2D2CCA to ensure that the most similar semantics are captured from the two branches, and we use the complementary RGB information to complete the missing depth. Extensive experiments on a total of four outdoor/indoor scene datasets show that our results achieve the state of the art.
1. What is the novelty of the proposed approach in the paper? 2. What are the strengths and weaknesses of the paper's execution and clarity? 3. How does the reviewer assess the significance and impact of the proposed method? 4. Are there any concerns or missing information regarding the comparisons with previous works?
Review
Review Originality: Somewhat. Using a CCA-based loss in combination with a new model architecture is perhaps new, but it is yet another combination of existing building blocks. The paper misses an important previous work: [P1] Yinda Zhang, Thomas Funkhouser. Deep Depth Completion of a Single RGB-D Image. CVPR 2018, which should be reviewed and included in the comparison. Quality: The paper execution is reasonable. I would not be able to reproduce it without the source code and data, but that is a common situation with many DL papers. Clarity: I found the paper quite hard to read, in particular section 3.2 and the graphs and tables in the experimental part. Many tables (e.g. 1, 2, 3, 4, 5) have no units. False color visualizations, e.g. Fig 6, are only very qualitative and do not allow one to really compare the performance of the methods. Significance: Medium. The method seems to work somewhat better than other methods but is not compared with all very relevant methods. The model is a combination of existing elements but reasonably engineered.
NIPS
1. What is the focus of the paper regarding depth completion? 2. What are the strengths of the proposed approach, particularly its novelty and performance? 3. Do you have any minor issues or suggestions regarding the paper's content or presentation?
Review
Review Summary This paper proposes CFCNet, a method for sparse depth completion from sparse depth and the corresponding RGB image, leveraging the relationship between them. The authors first present the structure of CFCNet. The basic idea is to use convnets to extract features from corresponding RGB and depth, minimizing the channelwise 2D^2CCA loss. A transformer network is introduced to map RGB features to depth features, and it is then used to transform the RGB image into depth. The authors then introduce 2D^2CCA, an extension of CCA to high-dimensional inputs. Finally, experiments on a range of datasets (both indoor and outdoor scenes) show that the proposed method achieves state-of-the-art performance. Strengths -The depth completion task is important, and the paper provides a method with good performance. -The method is novel. The authors map RGB and depth to two latent spaces where they are highly correlated, so that mapping between the two latent spaces becomes easier. -The visual results look impressive. As shown in figure 6 and the supplementary, even when testing on real images, results from CFCNet look impressive and contain much more detail compared to previous methods. Minor issues -There are many "directional"s in this paper, e.g., two-directional, and I still do not understand what they mean. -Consider reorganizing the notation. For example, F_{s_I} is introduced in line 148 but not in figure 2. Putting the notation into figure 2 might help readers understand section 3.2 more easily. Comments after the rebuttal ************************************ Thank the authors for the rebuttal. The visualized results look great and still have space for improvement. I agree with two other reviewers that this is overall a good paper, and my overall score remains the same.
NIPS
Title Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods Abstract The well known maximum-entropy principle due to Jaynes, which states that given mean parameters, the maximum entropy distribution matching them is in an exponential family, has been very popular in machine learning due to its "Occam's razor" interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable [BGS14]. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to that of the maximum entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube ([AN06]). 1 Introduction Maximum entropy principle The maximum entropy principle [Jay57] states that given mean parameters, i.e. $E_\mu[\phi_t(x)]$ for a family of functionals $\phi_t(x)$, $t \in [1, T]$, where $\mu$ is a distribution over the hypercube $\{-1,1\}^n$, the entropy-maximizing distribution $\mu$ is an exponential family distribution, i.e. $\mu(x) \propto \exp(\sum_{t=1}^T J_t \phi_t(x))$ for some potentials $J_t$, $t \in [1, T]$.1 This principle has been one of the reasons for the popularity of graphical models in machine learning: the "maximum entropy" assumption is interpreted as making "minimal assumptions" on the distribution beyond what is known about it. However, this principle is problematic from a computational point of view. Due to results of [BGS14, SV14], the potentials $J_t$ of the Ising model, in many cases, are impossible to estimate well in polynomial time, unless NP = RP – so merely getting the description of the maximum entropy distribution is already hard. Moreover, in order to extract useful information about this distribution, usually we would also like to at least be able to sample efficiently from it – which is typically NP-hard or even #P-hard. 1There is a more general way to state this principle over an arbitrary domain, not just the hypercube, but for clarity in this paper we will focus on the hypercube only. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. In this paper we address this problem in certain cases. We provide a "bi-criteria" approximation for the special case where the functionals are $\phi_{i,j}(x) = x_i x_j$, i.e. pairwise moments: we produce an efficiently sampleable distribution over the hypercube which matches these moments up to multiplicative constant factors, and has entropy at most a constant factor smaller than the entropy of the maximum entropy distribution matching those moments.2 Furthermore, the distribution which achieves this is very natural: the sign of a multivariate normal variable. This provides a theoretical explanation for the phenomenon observed by the computational neuroscience community [BB07] that this distribution (named the dichotomized Gaussian there) has near-maximum entropy.
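As a toy illustration of the principle (and of why it does not scale), one can fit the potentials of the pairwise exponential family by gradient ascent on the dual objective, enumerating all of $\{-1,1\}^n$; every name and step size below is ours:

```python
import itertools
import numpy as np

def max_entropy_potentials(Sigma, steps=2000, lr=0.1):
    """Brute-force fit of J so that mu(x) ~ exp(x^T J x) matches the
    pairwise moments Sigma. The gradient of log Z in J is E_mu[x x^T],
    so ascent on <J, Sigma> - log Z performs moment matching; runtime
    is 2^n, which is exactly the intractability discussed above."""
    n = Sigma.shape[0]
    X = np.array(list(itertools.product([-1, 1], repeat=n)))  # (2^n, n)
    J = np.zeros((n, n))
    for _ in range(steps):
        logits = np.einsum('ki,ij,kj->k', X, J, X)            # x^T J x
        p = np.exp(logits - logits.max())
        p /= p.sum()                                          # mu(x)
        model_moments = X.T @ (p[:, None] * X)                # E_mu[x x^T]
        J += lr * (Sigma - model_moments)
    return J, p
```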
Variational methods The above results also allow us to obtain results for a seemingly unrelated problem – approximating the partition function $Z = \sum_{x \in \{-1,1\}^n} \exp(\sum_{t=1}^T J_t \phi_t(x))$ of a member of an exponential family. This task is important because it is tied to calculating marginals. One of the ways this task is solved is via variational methods: namely, expressing $\log Z$ as an optimization problem. While there is a plethora of work on variational methods, of many flavors (mean field, Bethe/Kikuchi relaxations, TRBP, etc.; for a survey, see [WJ08]), they typically come either with no guarantees, or with guarantees in very constrained cases (e.g. loopless graphs; graphs with large girth, etc. [WJW03, WJW05]). While this is a rich area of research, the following extremely basic research question has not been answered: What is the best approximation guarantee on the partition function in the worst case (with no additional assumptions on the potentials)? In the low-temperature limit, i.e. when $|J_t| \to \infty$, $\log Z \to \max_{x \in \{-1,1\}^n} \sum_{t=1}^T J_t \phi_t(x)$ – i.e. the question reduces to pure optimization. In this regime, this question has very satisfying answers for many families $\phi_t(x)$. One classical example is when the functionals are $\phi_{i,j}(x) = x_i x_j$. In the graphical model community, these are known as Ising models, and in the optimization community this is the problem of optimizing quadratic forms, studied by [CW04, AN06, AMMN06]. In the optimization version, these papers showed that in the worst case one can get an $O(\log n)$ multiplicative factor approximation, and that unless P = NP, one cannot do better than constant factor approximations. In the finite-temperature version, it is known that it is NP-hard to achieve a $(1+\epsilon)$ factor approximation to the partition function (i.e. to construct an FPRAS) [SS12], but nothing is known about coarser approximations. We prove in this paper, informally, that one can get comparable multiplicative guarantees on the log-partition function in the finite temperature case as well – using the tools and insights we develop for the maximum entropy principles. Our methods are extremely generic, and likely to apply to many other exponential families where algorithms based on linear/semidefinite programming relaxations are known to give good guarantees in the optimization regime. 2 Statements of results and prior work Approximate maximum entropy The main theorem in this section is the following one. Theorem 2.1. For any covariance matrix $\Sigma$ of a centered distribution $\mu : \{-1,1\}^n \to \mathbb{R}$, i.e. $E_\mu[x_i x_j] = \Sigma_{i,j}$, $E_\mu[x_i] = 0$, there is an efficiently sampleable distribution $\tilde{\mu}$, which can be sampled as $\mathrm{sign}(g)$, where $g \sim \mathcal{N}(0, \Sigma + \beta I)$, satisfies $\frac{G}{1+\beta} \Sigma_{i,j} \le E_{\tilde{\mu}}[X_i X_j] \le \frac{1}{1+\beta} \Sigma_{i,j}$, and has entropy $H(\tilde{\mu}) \ge \frac{n}{25} \frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\,\beta}$, for any $\beta \ge \frac{1}{3^{1/2}}$. There are two prior works on computational issues relating to maximum entropy principles, both proving hardness results. [BGS14] considers the "hard-core" model where the functionals $\phi_t$ are such that the distribution $\mu(x)$ puts zero mass on configurations $x$ which are not independent sets with respect to some graph $G$. 2In fact, we produce a distribution with entropy $\Omega(n)$, which implies the latter claim since the maximum entropy of any distribution over $\{-1,1\}^n$ is at most $n$. They show that unless NP = RP, there is no FPRAS for calculating the potentials $J_t$, given the mean parameters $E_\mu[\phi_t(x)]$. [SV14] prove an equivalence between calculating the mean parameters and calculating partition functions.
More precisely, they show that given an oracle that can calculate the mean parameters up to a (1 + ε) multiplicative factor in time O(poly(1/ε)), one can calculate the partition function of the same exponential family up to a (1 + O(poly(ε))) multiplicative factor, in time O(poly(1/ε)). Note that the ε in this work potentially needs to be polynomially small in n (i.e. an oracle that can calculate the mean parameters only to a fixed multiplicative constant cannot be used). Both results prove hardness for fine-grained approximations to the maximum entropy principle, and ask for outputting approximations to the mean parameters. Our result circumvents these hardness results by providing a distribution which is not in the maximum-entropy exponential family, and is allowed to only approximately match the moments as well. To the best of our knowledge, such an approximation, while very natural, has not been considered in the literature.

Provable variational methods The main theorems in this section concern the approximation factor that can be achieved by degree-2 pseudo-moment relaxations of the standard variational principle due to Gibbs ([Ell12]). As outlined before, we will be concerned with a particularly popular exponential family: Ising models. We will prove the following three results:

Theorem 2.2 (Ferromagnetic Ising, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of 50, the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(∑_{i,j} J_{i,j} x_i x_j) with J_{i,j} > 0.

Theorem 2.3 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of O(log n), the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(∑_{i,j} J_{i,j} x_i x_j).

Theorem 2.4 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of O(log χ(G)), the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(∑_{i,j∈E(G)} J_{i,j} x_i x_j), and G = (V(G), E(G)) is a graph with chromatic number χ(G).³

While a lot of work has been done on variational methods in general (see the survey by [WJ08] for a detailed overview), to the best of our knowledge nothing is known about the worst-case guarantee that we are interested in here. Moreover, other than a recent paper by [Ris16], no other work has provided provable bounds for variational methods that proceed via a convex relaxation and a rounding thereof.⁴ [Ris16] provides guarantees in the case of Ising models that are also based on pseudo-moment relaxations of the variational principle, albeit only in the special case when the graph is “dense” in a suitably defined sense.⁵ The results there are very specific to the density assumption and cannot be adapted to our worst-case setting. Finally, we mention that in the special case of ferromagnetic Ising models, an algorithm based on MCMC was provided by [JS93], which can give an approximation factor of (1 + ε) to the partition function and runs in time O(n^{11} poly(1/ε)). In spite of this, the focus of this part of our paper is to provide understanding of variational methods in certain cases, as they continue to be popular in practice for their faster running time compared to MCMC-based methods, but are theoretically much more poorly studied.
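To make the quantities in Section 2 concrete, here is a brute-force sketch (our own, in Python; names are ours) that computes log Z exactly for a tiny Ising model by enumeration and illustrates the low-temperature limit log Z → max_x ∑_{i,j} J_{i,j} x_i x_j as the potentials are scaled up. It is exponential in n and only meant as a reference point for the guarantees above.

import itertools
import numpy as np
from scipy.special import logsumexp

def ising_energies(J):
    # All values of sum_{i,j} J_ij x_i x_j over x in {-1, 1}^n.
    n = J.shape[0]
    return np.array([x @ J @ x for x in
                     (np.array(s) for s in itertools.product([-1, 1], repeat=n))])

rng = np.random.default_rng(0)
n = 6
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)

for scale in (0.1, 1.0, 10.0):  # |J| -> infinity is the low-temperature limit
    e = ising_energies(scale * J)
    print(f"scale {scale:5.1f}: log Z = {logsumexp(e):9.2f}, max energy = {e.max():9.2f}")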
3. Theorem 2.4 is strictly more general than Theorem 2.3; however, the proof of Theorem 2.3 uses less heavy machinery and is illuminating enough that we feel it merits being presented as a separate theorem.

4. In some sense, it is possible to give provable bounds for Bethe-entropy based relaxations, via analyzing belief propagation directly, which has been done in cases where there is correlation decay and the graph is locally tree-like. [WJ08] has a detailed overview of such results.

5. More precisely, they prove that when ∀i, j, |J_{i,j}| ≤ (∆/n²) ∑_{i,j} |J_{i,j}|, one can get an additive ε(∑_{i,j} J_{i,j}) approximation to log Z in time n^{O(∆/ε²)}.

3 Approximate maximum entropy principles

Let us recall the problem we want to solve:

Approximate maximum entropy principles We are given a positive-semidefinite matrix Σ ∈ R^{n×n} with Σ_{i,i} = 1, ∀i ∈ [n], which is the covariance matrix of a centered distribution over {−1, 1}^n, i.e. E_µ[x_i x_j] = Σ_{i,j}, E_µ[x_i] = 0, for a distribution µ : {−1, 1}^n → R. We wish to produce a distribution µ̃ : {−1, 1}^n → R with pairwise covariances that match the given ones up to constant factors, and entropy within a constant factor of the maximum entropy distribution with covariance Σ.⁶

Before stating the result formally, it will be useful to define the following constant:

Definition 3.1. Define the constant G = min_{t∈[−1,1]} {(2/π) arcsin(t)/t} ≈ 0.64.

We will prove the following main theorem:

Theorem 3.1 (Main, approximate entropy principle). For any positive-semidefinite matrix Σ with Σ_{i,i} = 1, ∀i, there is an efficiently sampleable distribution µ̃ : {−1, 1}^n → R, which can be sampled as sign(g), where g ∼ N(0, Σ + βI), and which satisfies (G/(1+β)) Σ_{i,j} ≤ E_µ̃[x_i x_j] ≤ (1/(1+β)) Σ_{i,j} and has entropy H(µ̃) ≥ (n/25) · (3^{1/4}√β − 1)² / (√3 β), where β ≥ 1/√3.

Note that µ̃ is in fact very close to the one which is classically used to round semidefinite relaxations for solving the MAX-CUT problem [GW95]. We will prove Theorem 3.1 in two parts – by first lower bounding the entropy of µ̃, and then by bounding the moments of µ̃.

Theorem 3.2. The entropy of the distribution µ̃ satisfies H(µ̃) ≥ (n/25) · (3^{1/4}√β − 1)² / (√3 β) when β ≥ 1/√3.

Proof. A sample g from N(0, Σ̃) can be produced by sampling g_1 ∼ N(0, Σ), g_2 ∼ N(0, βI) and setting g = g_1 + g_2. The sum of two independent multivariate normals is again a multivariate normal. Furthermore, the mean of g is 0, and since g_1, g_2 are independent, the covariance of g is Σ + βI = Σ̃. Let us denote by Y = sign(g_1 + g_2) the random variable distributed according to µ̃. We wish to lower bound the entropy of Y. Toward that goal, denote the random variable S := {i ∈ [n] : (g_1)_i² ≤ cD} for c, D to be chosen. Then, with γ = (c−1)/c, we have

H(Y) ≥ H(Y|S) = ∑_{S⊆[n]} Pr[S = S] H(Y|S = S) ≥ ∑_{S⊆[n], |S|≥γn} Pr[S = S] H(Y|S = S),

where the first inequality holds since conditioning does not increase entropy, and the second by the non-negativity of entropy. Continuing the calculation, we get

∑_{S⊆[n], |S|≥γn} Pr[S = S] H(Y|S = S) ≥ ∑_{S⊆[n], |S|≥γn} Pr[S = S] · min_{S⊆[n], |S|≥γn} H(Y|S = S) = Pr[|S| ≥ γn] · min_{S⊆[n], |S|≥γn} H(Y|S = S).

We will lower bound Pr[|S| ≥ γn] first. Notice that E[∑_{i=1}^n (g_1)_i²] = n; therefore, by Markov’s inequality,

Pr[∑_{i=1}^n (g_1)_i² ≥ Dn] ≤ 1/D.

On the other hand, if ∑_{i=1}^n (g_1)_i² ≤ Dn, then |{i : (g_1)_i² ≥ cD}| ≤ n/c, which means that |{i : (g_1)_i² ≤ cD}| ≥ n − n/c = (c−1)n/c = γn. Putting things together, this means

Pr[|S| ≥ γn] ≥ 1 − 1/D.

It remains to lower bound min_{S⊆[n], |S|≥γn} H(Y|S = S).
For every S ⊆ [n] with |S| ≥ γn, denoting by Y_S the coordinates of Y restricted to S, we get

H(Y|S = S) ≥ H(Y_S|S = S) ≥ H_∞(Y_S|S = S) = −log(max_{y_S} Pr[Y_S = y_S|S = S])

(where H_∞ is the min-entropy), so we only need to bound max_{y_S} Pr[Y_S = y_S|S = S]. We will now, for any y_S, upper bound Pr[Y_S = y_S|S = S]. Recall that the event S = S implies that ∀i ∈ S, (g_1)_i² ≤ cD. Since g_2 is independent of g_1, we know that for every fixed g ∈ R^n:

Pr[Y_S = y_S|S = S, g_1 = g] = Π_{i∈S} Pr[sign([g]_i + [g_2]_i) = y_i].

For a fixed i ∈ S, consider the term Pr[sign([g]_i + [g_2]_i) = y_i]. Without loss of generality, let us assume [g]_i > 0 (the proof is completely symmetric in the other case). Then, since [g]_i is positive and g_2 has mean 0, we have Pr[[g]_i + [g_2]_i < 0] ≤ 1/2. Moreover,

Pr[[g]_i + [g_2]_i > 0] = Pr[[g_2]_i > 0] Pr[[g]_i + [g_2]_i > 0 | [g_2]_i > 0] + Pr[[g_2]_i < 0] Pr[[g]_i + [g_2]_i > 0 | [g_2]_i < 0].

The first term is upper bounded by 1/2, since Pr[[g_2]_i > 0] ≤ 1/2. The second term we bound using standard Gaussian tail bounds:

Pr[[g]_i + [g_2]_i > 0 | [g_2]_i < 0] ≤ Pr[|[g_2]_i| ≤ |[g]_i| | [g_2]_i < 0] = Pr[|[g_2]_i| ≤ |[g]_i|] ≤ Pr[([g_2]_i)² ≤ cD] = 1 − Pr[([g_2]_i)² > cD] ≤ 1 − (2/√(2π)) exp(−cD/(2β)) (√(β/(cD)) − (√(β/(cD)))³),

which implies

Pr[[g_2]_i < 0] Pr[[g]_i + [g_2]_i > 0 | [g_2]_i < 0] ≤ (1/2)(1 − (2/√(2π)) exp(−cD/(2β)) (√(β/(cD)) − (√(β/(cD)))³)).

Putting this together, we have

Pr[sign((g_1)_i + (g_2)_i) = y_i] ≤ 1 − (1/√(2π)) exp(−cD/(2β)) (√(β/(cD)) − (√(β/(cD)))³).

Together with the fact that |S| ≥ γn, we get

Pr[Y_S = y_S|S = S, g_1 = g] ≤ (1 − (1/√(2π)) exp(−cD/(2β)) (√(β/(cD)) − (√(β/(cD)))³))^{γn},

which implies that

H(Y) ≥ −(1 − 1/D) · ((c−1)/c) · n · log(1 − (1/√(2π)) exp(−cD/(2β)) (√(β/(cD)) − (√(β/(cD)))³)).

By setting c = D = 3^{1/4}√β, a straightforward (albeit unpleasant) calculation verifies that H(Y) ≥ (n/25) · (3^{1/4}√β − 1)² / (√3 β), as we need.

6. Note that for a distribution over {−1, 1}^n, the maximal entropy a distribution can have is n, which is achieved by the uniform distribution.

We next show that the moments of the distribution are preserved up to a constant G/(1+β).

Lemma 3.1. The distribution µ̃ satisfies (G/(1+β)) Σ_{i,j} ≤ E_µ̃[X_i X_j] ≤ (1/(1+β)) Σ_{i,j}.

Proof. Consider the Gram decomposition Σ̃_{i,j} = ⟨v_i, v_j⟩. Then a sample from µ̃ is equal in distribution to (sign(⟨v_1, s⟩), …, sign(⟨v_n, s⟩)), where s ∼ N(0, I). Similarly as in the analysis of Goemans-Williamson [GW95], if v̄_i = v_i/‖v_i‖, we have

G⟨v̄_i, v̄_j⟩ ≤ E_µ̃[X_i X_j] = (2/π) arcsin(⟨v̄_i, v̄_j⟩) ≤ ⟨v̄_i, v̄_j⟩.

However, since ⟨v̄_i, v̄_j⟩ = ⟨v_i, v_j⟩/(‖v_i‖‖v_j‖) = Σ̃_{i,j}/(‖v_i‖‖v_j‖) = Σ_{i,j}/(‖v_i‖‖v_j‖) and ‖v_i‖ = √(Σ̃_{i,i}) = √(1+β) for all i ∈ [n], we get that

(G/(1+β)) Σ_{i,j} ≤ E_µ̃[X_i X_j] ≤ (1/(1+β)) Σ_{i,j},

as we want.

Theorem 3.2 and Lemma 3.1 together imply Theorem 3.1.

4 Provable bounds for variational methods

In this section we consider applications of the approximate maximum entropy principles we developed to calculating partition functions of Ising models. Before we dive into the results, we give brief preliminaries on variational methods and pseudo-moment convex relaxations.

Preliminaries on variational methods and pseudo-moment convex relaxations Recall that variational methods are based on the following simple lemma, which characterizes log Z as the solution of an optimization problem. It essentially dates back to Gibbs [Ell12], who used it in the context of statistical mechanics, though it has been rediscovered by machine learning researchers [WJ08]:

Lemma 4.1 (Variational characterization of log Z). Let us denote by M the polytope of distributions over {−1, 1}^n.
Then,

log Z = max_{µ∈M} { ∑_t J_t E_µ[φ_t(x)] + H(µ) }.   (1)

While the above lemma reduces calculating log Z to an optimization problem, optimizing over the polytope M is impossible in polynomial time. We will proceed in a way which is natural for optimization problems – by instead optimizing over a relaxation M′ of that polytope. The relaxation will be associated with the degree-2 Lasserre hierarchy. Intuitively, M′ has as variables tentative pairwise moments of a distribution over {−1, 1}^n, and it imposes all constraints on the moments that hold for distributions over {−1, 1}^n. To define M′ more precisely we will need the following notion (for a more in-depth review of moment-based convex hierarchies, the reader can consult [BKS14]):

Definition 4.1. A degree-2 pseudo-moment⁷ Ẽ_ν[·] is a linear operator mapping polynomials of degree at most 2 to R, such that Ẽ_ν[x_i²] = 1 and Ẽ_ν[p(x)²] ≥ 0 for any polynomial p(x) of degree 1.

We will be optimizing over the polytope M′ of all degree-2 pseudo-moments, i.e. we will consider solving

max_{Ẽ_ν[·]∈M′} { ∑_t J_t Ẽ_ν[φ_t(x)] + H̃(Ẽ_ν[·]) },

where H̃ is a proxy for the entropy which we will have to define (since entropy is a global property that depends on all moments, and Ẽ_ν only contains information about second-order moments).

To see that this optimization problem is convex, we show that it can easily be written as a semidefinite program. Namely, note that the pseudo-moment operators are linear, so it suffices to define them over monomials only. Hence, the variables will simply be Ẽ_ν[x_S] for all monomials x_S of degree at most 2. The constraints Ẽ_ν[x_i²] = 1 are then clearly linear, as is the “energy part” of the objective function. So we only need to worry about the constraint Ẽ_ν[p(x)²] ≥ 0 and the entropy functional. We claim the constraint Ẽ_ν[p(x)²] ≥ 0 can be written as a PSD constraint: namely, define the matrix Q, indexed by all monomials of degree at most 1, which satisfies Q(x_S, x_T) = Ẽ_ν[x_S x_T]. It is easy to see that Ẽ_ν[p(x)²] ≥ 0 for all degree-1 polynomials p is equivalent to Q ⪰ 0.

Hence, the final concern is how to write an expression for the entropy in terms of the low-order moments, since entropy is a global property that depends on all moments. There are many candidates for this in machine learning, such as the Bethe/Kikuchi entropy, the tree-reweighted Bethe entropy, the log-determinant, etc. However, in the worst case, none of them come with any guarantees. We will in fact show that the entropy functional is not an issue – we will relax the entropy trivially to n. Given all of this, the final relaxation we will consider is:

max_{Ẽ_ν[·]∈M′} { ∑_t J_t Ẽ_ν[φ_t(x)] + n }.   (2)

From the prior setup it is clear that the solution to (2) is an upper bound on log Z. To prove a claim like Theorem 2.3 or Theorem 2.4, we then provide a rounding of the solution. In this instance, this means producing a distribution µ̃ for which the value of ∑_t J_t E_µ̃[φ_t(x)] + H(µ̃) is comparable to the value of the solution. Note that this is slightly different from the usual requirement in optimization, where one cares only about producing a single x ∈ {−1, 1}^n with value comparable to the solution. Our distribution µ̃ will have entropy Ω(n), and will preserve the “energy” portion of the objective ∑_t J_t E_µ[φ_t(x)] up to a factor comparable to what is achievable in the optimization setting.

7. The reason Ẽ_ν[·] is called a pseudo-moment is that it behaves like the moments of a distribution ν : {−1, 1}^n → [0, 1], albeit only over polynomials of degree at most 2.
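As a concrete rendering of these preliminaries, the relaxation (2) (without linear terms, as in the Ising models considered here) is a small semidefinite program. The sketch below is our own illustration, not code from the paper; it uses cvxpy and assumes an SDP-capable solver such as SCS is installed. Q plays the role of the pseudo-moment matrix restricted to degree-1 monomials, with Q_{ii} = Ẽ_ν[x_i²] = 1.

import cvxpy as cp
import numpy as np

def relaxation_value(J):
    # Solve max sum_{i,j} J_ij * Q_ij + n over PSD Q with unit diagonal,
    # i.e. the degree-2 pseudo-moment relaxation (2) with entropy relaxed to n.
    n = J.shape[0]
    Q = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(J, Q)) + n)
    problem = cp.Problem(objective, [cp.diag(Q) == 1])
    problem.solve()
    return problem.value, Q.value

rng = np.random.default_rng(0)
n = 6
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
value, Q = relaxation_value(J)
print("upper bound on log Z from (2):", value)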
Warmup: exponential family analogue of MAX-CUT As a warmup, to illustrate the basic ideas behind the above rounding strategy, before we consider Ising models we consider the exponential family analogue of MAX-CUT. It is defined by the functionals φ_{i,j}(x) = (x_i − x_j)², with weights J_{i,j} ≥ 0 as in MAX-CUT. Concretely, we wish to approximate the partition function of the distribution µ(x) ∝ exp(∑_{i,j} J_{i,j}(x_i − x_j)²). We will prove the following simple observation:

Observation 4.1. The relaxation (2) provides a factor 2 approximation of log Z.

Proof. We proceed as outlined in the previous section, by providing a rounding of (2). We point out again that, unlike the standard case in optimization, where one typically needs to produce an assignment of the variables, because of the entropy term it is crucial here that the rounding produces a distribution. The distribution µ̃ we produce is especially simple: we round each x_i independently to ±1 with probability 1/2 each. Then, clearly, H(µ̃) = n. On the other hand, since x_i and x_j are rounded independently, E_µ̃[(x_i − x_j)²] = 2, while Ẽ_ν[(x_i − x_j)²] ≤ 4, so E_µ̃[(x_i − x_j)²] ≥ (1/2) Ẽ_ν[(x_i − x_j)²]. Altogether, this implies

∑_{i,j} J_{i,j} E_µ̃[(x_i − x_j)²] + H(µ̃) ≥ (1/2)(∑_{i,j} J_{i,j} Ẽ_ν[(x_i − x_j)²] + n),

as we needed.

4.1 Ising models

We proceed with the main results of this section on Ising models, i.e. the case where φ_{i,j}(x) = x_i x_j. We split into the ferromagnetic and the general case, as outlined in Section 2. To be concrete, we are given potentials J_{i,j}, and we wish to calculate the partition function of the Ising model µ(x) ∝ exp(∑_{i,j} J_{i,j} x_i x_j).

Ferromagnetic case Recall that in the ferromagnetic case of the Ising model, the potentials satisfy J_{i,j} > 0. We will provide a convex relaxation which achieves a constant factor approximation in this case. First, recall the famous first Griffiths inequality due to Griffiths [Gri67], which states that in the ferromagnetic case, E_µ[x_i x_j] ≥ 0, ∀i, j. Using this inequality, we will look at the following natural strengthening of the relaxation (2):

max_{Ẽ_ν[·]∈M′; Ẽ_ν[x_i x_j]≥0, ∀i,j} { ∑_t J_t Ẽ_ν[φ_t(x)] + n }.   (3)

We will prove the following theorem, as a straightforward implication of our claims from Section 3:

Theorem 4.1. The relaxation (3) provides a factor 50 approximation of log Z.

Proof. Notice that, due to Griffiths’ inequality, (3) is in fact a relaxation of the Gibbs variational principle, and hence an upper bound on log Z. As before, we provide a rounding of (3). We use the distribution µ̃ we designed in Section 3 – the sign of a Gaussian with covariance matrix Σ + βI, for a β which we will specify. By Theorem 3.2, we then have H(µ̃) ≥ (n/25) · (3^{1/4}√β − 1)² / (√3 β) whenever β ≥ 1/√3. By Lemma 3.1, on the other hand,

E_µ̃[x_i x_j] ≥ (G/(1+β)) Ẽ_ν[x_i x_j].

By setting β = 21.8202, we get (1/25) · (3^{1/4}√β − 1)² / (√3 β) ≥ 0.02 and G/(1+β) ≥ 0.02 (a small numeric check appears after this subsection), which implies that

∑_{i,j} J_{i,j} E_µ̃[x_i x_j] + H(µ̃) ≥ 0.02 (∑_{i,j} J_{i,j} Ẽ_ν[x_i x_j] + n),

which is what we need.

Note that the above proof does not work in the general Ising model case: when Ẽ_ν[x_i x_j] can be either positive or negative, even if we preserved each Ẽ_ν[x_i x_j] up to a constant factor, this may not preserve the sum ∑_{i,j} J_{i,j} Ẽ_ν[x_i x_j], due to cancellations in that expression.

General Ising models case Finally, we tackle the general Ising model case. As noted above, a straightforward application of the results proven in Section 3 does not work, so we have to consider a different rounding – again inspired by roundings used in optimization.
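Before moving on to the general case, here is the promised quick arithmetic check (our own) of the constants in the proof of Theorem 4.1: with β = 21.8202 and G = 2/π, both the entropy coefficient and the moment-preservation factor clear the 0.02 threshold, which is where the factor 50 = 1/0.02 comes from.

import numpy as np

beta = 21.8202
G = 2 / np.pi  # the minimum in Definition 3.1, attained as t -> 0
entropy_coeff = (3 ** 0.25 * np.sqrt(beta) - 1) ** 2 / (25 * np.sqrt(3) * beta)
moment_factor = G / (1 + beta)
print(entropy_coeff, moment_factor)  # both come out just above 0.02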
The intuition is the same as in the ferromagnetic case: we wish to design a rounding which preserves the “energy” portion of the objective while having high entropy. In the previous section, this was achieved by modifying the Goemans-Williamson rounding so that it produces a high-entropy distribution. We do a similar thing here, by modifying roundings due to [CW04] and [AMMN06]. The convex relaxation we consider is just the basic one, (2), and we prove the following two theorems:

Theorem 4.2. The relaxation (2) provides a factor O(log n) approximation to log Z when φ_{i,j}(x) = x_i x_j.

Theorem 4.3. The relaxation (2) provides a factor O(log(χ(G))) approximation to log Z when φ_{i,j}(x) = x_i x_j for (i, j) ∈ E(G) of some graph G = (V(G), E(G)), where χ(G) is the chromatic number of G.

Since the chromatic number of a graph is bounded by n, the second theorem is in fact strictly stronger than the first; however, the proof of the first theorem uses less heavy machinery, and is illuminating enough to be presented on its own. Due to space constraints, the proofs of these theorems are deferred to the appendix.

5 Conclusion

In summary, we presented computationally efficient approximate versions of the classical max-entropy principle of [Jay57]: efficiently sampleable distributions which preserve given pairwise moments up to a multiplicative constant factor, while having entropy within a constant factor of the maximum entropy distribution matching those moments. Additionally, we applied our insights to designing provable variational methods for Ising models which provide guarantees for approximating the log-partition function comparable to those in the optimization setting. Our methods are based on convex relaxations of the standard variational principle due to Gibbs, and are extremely generic; we hope they will find applications to other exponential families.
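Finally, an end-to-end sanity check (our own sketch, with the same cvxpy/SCS and naming caveats as above) for the ferromagnetic case: solve the strengthened relaxation (3) for a small random ferromagnetic Ising model and compare its value against the exact log Z by enumeration. Since (3) upper bounds log Z and Theorem 4.1 certifies a factor 50, the ratio printed below should be at most 50 (on small random instances it is typically far smaller).

import itertools
import cvxpy as cp
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
n = 8
J = np.abs(rng.normal(size=(n, n))); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)  # J_ij > 0

# Relaxation (3): PSD pseudo-moment matrix, unit diagonal, nonnegative entries.
Q = cp.Variable((n, n), PSD=True)
problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(J, Q)) + n),
                     [cp.diag(Q) == 1, Q >= 0])
problem.solve()

# Exact log Z by enumeration (2^n terms; fine for n = 8).
energies = [np.array(x) @ J @ np.array(x) for x in itertools.product([-1, 1], repeat=n)]
exact = logsumexp(energies)
print("relaxation (3):", problem.value, "exact log Z:", exact,
      "ratio:", problem.value / exact)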
1. What is the focus of the paper regarding maximum-entropy distributions and log-partition functions? 2. What are the strengths of the paper in terms of technical contributions and novelty? 3. What are the weaknesses of the paper regarding its motivation and impact? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any suggestions for improving the paper or extending its results to make it more relevant and impactful?
Review
Review The authors provide efficiently sampleable approximations to max-entropy distributions given pairwise moments. They also provide bounds on approximating the log-partition function of the Ising model via pairwise approximating distributions. The technical contributions of this paper are well-explained and appear correct. My main problem with the paper is motivation. I was never convinced that the maximum-entropy principle was useful, so being able to approximate maxent distributions doesn't seem especially important. The second part of the results is more interesting. As far as I understand, there are few bounds on log-partition functions. I'm not very familiar with the state of the literature, but this seems like an interesting result that could lead to further useful theorems or methods. The authors mention that their results might extend to other exponential families. However, without any other examples, and no experiments, I think this paper belongs in COLT rather than NIPS. If the paper took more steps towards turning its result into concrete methodological improvements, it might be turned into a NIPS paper.
Typos:
Line 212 - in what sense are \phi functionals? aren't they just functions?
Lines 215 and 216 - in black-and-white, it's confusing that you refer to the numeral 2 and equation 2 in the same way in the same sentence.
Line 304 - isolated footnote
1. What is the main contribution of the paper regarding Ising models? 2. What are the strengths and weaknesses of the proposed approximation algorithm? 3. Do you have any questions or concerns about the presentation and details of the proof steps? 4. How does the reviewer assess the novelty and significance of the paper's content? 5. Are there any minor issues or suggestions for improvement in the paper?
Review
Review The authors provide an approximation algorithm for the log-partition function of Ising models. The algorithm uses an approximation scheme following the famous Goemans-Williamson MAX-CUT algorithm. This is a nice paper, a bit of an odd match for NIPS (there are no numerical experiments, and in spite of claims of genericity and applicability to general exponential families, I remain unconvinced). The methods are elegant, though I did find the presentation a bit lacking. I would have loved a high-level outline of the proof steps and proof intuition, with pointers to precise sub-proposition statements and corresponding proofs. Right now, it is easy to get lost in the details, and what appear to me as the key moments of the proof are skimmed over quickly. For instance, Lemma 3.1 deserved to be expanded upon (even the long version is a bit quick on details here) - this is especially since the GW proof technique is so elegant, it's always nice to include (even if similar to the original proof). Similarly, it seems to me the main theorems are in fact Theorems 2.2-2.4, not Theorem 2.1 (which has a host of odd constants and a bound on entropy which is not clearly interesting at first glance*). Theorem 2.1's existence seems justified by Theorems 2.2-2.4 - would it make sense to introduce 2.2-2.4 first, then 2.1 afterwards as a proof technique? Similarly, should the proofs for 2.2-2.4 have contained slightly more detail?
* is the bound interesting in itself? It does not seem to relate to the actual entropy of the max-entropy distribution, so it was not clear to me whether the result was impressive or not.
minor:
- Given the paper does not build further than degree-2 pseudo-moments, it's not clear that using the language from that hierarchy helps understanding - I think a clear (equation-driven) definition of the polytope used in that section would have made for an easier read.
- Many equations are referred to by number - as in, 'solutions to 2' (lines 203, 237); it would be preferable to either use 'solution to equation 2' or 'solution to (2)'.
- In the proof of 3.1, can \mathcal{N}(0,\tilde \Sigma) - a continuous vector - really be *equal* in distribution to (sign(v1,s),...) - a binary vector? I realize a lot of the ideas in the paper are related to sampling discrete variables with a covariance matrix similar to that of a Gaussian, but at this particular point of the proof, it seems odd.
- I don't have a strong intuition for the proofs of the paper - I will say, however, that I am surprised that the entropy terms (which are the very reason there is a difference between MAP and posterior sampling) can be bounded in such trivial ways (bounded by the entropy of the uniform), and still yield interesting results.
- In the proof of Theorem 3.2, line 154, I am surely missing something very simple, but how do we have P(g2 \leq g) \leq P(g2^2 \leq cD)?
- I didn't quite get the proof of Observation 4.4 - we have E_{\tilde \mu}[(x_i-x_j)^2] \geq 1/2. How does the expectation over \nu appear in the following equation?
NIPS
Title Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods Abstract The well known maximum-entropy principle due to Jaynes, which states that given mean parameters, the maximum entropy distribution matching them is in an exponential family has been very popular in machine learning due to its “Occam’s razor” interpretation. Unfortunately, calculating the potentials in the maximumentropy distribution is intractable [BGS14]. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to the maximum entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube. ([AN06]) 1 Introduction Maximum entropy principle The maximum entropy principle [Jay57] states that given mean parameters, i.e. Eµ[φt(x)] for a family of functionals φt(x), t ∈ [1, T ], where µ is distribution over the hypercube {−1, 1}n, the entropy-maximizing distribution µ is an exponential family distribution, i.e. µ(x) ∝ exp( ∑T t=1 Jtφt(x)) for some potentials Jt, t ∈ [1, T ]. 1 This principle has been one of the reasons for the popularity of graphical models in machine learning: the “maximum entropy” assumption is interpreted as “minimal assumptions” on the distribution other than what is known about it. However, this principle is problematic from a computational point of view. Due to results of [BGS14, SV14], the potentials Jt of the Ising model, in many cases, are impossible to estimate well in polynomial time, unless NP = RP – so merely getting the description of the maximum entropy distribution is already hard. Moreover, in order to extract useful information about this distribution, usually we would also like to at least be able to sample efficiently from this distribution – which is typically NP-hard or even #P-hard. 1There is a more general way to state this principle over an arbitrary domain, not just the hypercube, but for clarity in this paper we will focus on the hypercube only. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. In this paper we address this problem in certain cases. We provide a “bi-criteria” approximation for the special case where the functionals φt(x) are φi,j(x) = xixj , i.e. pairwise moments: we produce a efficiently sampleable distribution over the hypercube which matches these moments up to multiplicative constant factors, and has entropy at most a constant factor smaller from from the entropy of the maximum entropy distribution. 2 Furthermore, the distribution which achieves this is very natural: the sign of a multivariate normal variable. This provides theoretical explanation for the phenomenon observed by the computational neuroscience community [BB07] that this distribution (there named dichotomized Gaussian there) has near-maximum entropy. 
Variational methods The above results also allow us to get results for a seemingly unrelated problem – approximating the partition function Z = ∑ x∈{−1,1}n exp( ∑T t=1 Jtφt(x)) of a member of an exponential family. The reason this task is important is that it is tied to calculating marginals. One of the ways this task is solved is variational methods: namely, expressing logZ as an optimization problem. While there is a plethora of work on variational methods, of many flavors (mean field, Bethe/Kikuchi relaxations, TRBP, etc. for a survey, see [WJ08]), they typically come either with no guarantees, or with guarantees in very constrained cases (e.g. loopless graphs; graphs with large girth, etc. [WJW03, WJW05]). While this is a rich area of research, the following extremely basic research question has not been answered: What is the best approximation guarantee on the partition function in the worst case (with no additional assumptions on the potentials)? In the low-temperature limit, i.e. when |Jt| → ∞, logZ → maxx∈{−1,1}n ∑T t=1 Jtφt(x) - i.e. the question reduces to purely optimization. In this regime, this question has very satisfying answers for many families φt(x). One classical example is when the functionals are φi,j(x) = xixj . In the graphical model community, these are known as Ising models, and in the optimization community this is the problem of optimizing quadratic forms and has been studied by [CW04, AN06, AMMN06]. In the optimization version, the previous papers showed that in the worst case, one can get O(log n) factor multiplicative factor approximation of it, and that unless P = NP, one cannot get better than constant factor approximations of it. In the finite-temperature version, it is known that it is NP-hard to achieve a 1 + factor approximation to the partition function (i.e. construct a FPRAS) [SS12], but nothing is known about coarser approximations. We prove in this paper, informally, that one can get comparable multiplicative guarantees on the log-partition function in the finite temperature case as well – using the tools and insights we develop on the maximum entropy principles. Our methods are extremely generic, and likely to apply to many other exponential families, where algorithms based on linear/semidefinite programming relaxations are known to give good guarantees in the optimization regime. 2 Statements of results and prior work Approximate maximum entropy The main theorem in this section is the following one. Theorem 2.1. For any covariance matrix Σ of a centered distribution µ : {−1, 1}n → R, i.e. Eµ[xixj ] = Σi,j , Eµ[xi] = 0, there is an efficiently sampleable distribution µ̃, which can be sampled as sign(g), where g ∼ N (0,Σ + βI) and satisfies G 1 + β Σi,j ≤ Eµ̃[XiXj ] ≤ 1 1 + β Σi,j and has entropy H(µ̃) ≥ n25 (31/4 √ β−1)2√ 3β , for any β ≥ 1 31/2 . There are two prior works on computational issues relating to maximum entropy principles, both proving hardness results. [BGS14] considers the “hard-core” model where the functionals φt are such that the distribution µ(x) puts zero mass on configurations x which are not independent sets with respect to some graph G. 2In fact, we produce a distribution with entropy Ω(n), which implies the latter claim since the maximum entropy of any distribution of over {−1, 1}n is at most n They show that unless NP = RP, there is no FPRAS for calculating the potentials Jt, given the mean parameters Eµ[φt(x)]. [SV14] prove an equivalence between calculating the mean parameters and calculating partition functions. 
More precisely, they show that given an oracle that can calculate the mean parameters up to a $(1+\varepsilon)$ multiplicative factor in time $O(\mathrm{poly}(1/\varepsilon))$, one can calculate the partition function of the same exponential family up to a $(1+O(\mathrm{poly}(\varepsilon)))$ multiplicative factor, in time $O(\mathrm{poly}(1/\varepsilon))$. Note that the $\varepsilon$ in this work potentially needs to be polynomially small in $n$ (i.e. an oracle that can calculate the mean parameters up to a fixed multiplicative constant cannot be used). Both results prove hardness for fine-grained approximations to the maximum entropy principle, and ask for outputting approximations to the mean parameters. Our result circumvents these hardness results by providing a distribution which is not in the maximum-entropy exponential family, and which is allowed to only approximately match the moments as well. To the best of our knowledge, such an approximation, while very natural, has not been considered in the literature.

Provable variational methods The main theorems in this section concern the approximation factor that can be achieved by degree-2 pseudo-moment relaxations of the standard variational principle due to Gibbs ([Ell12]). As outlined before, we will be concerned with a particularly popular exponential family: Ising models. We will prove the following three results:

Theorem 2.2 (Ferromagnetic Ising, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of 50, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp(\sum_{i,j} J_{i,j}x_ix_j)$ for $J_{i,j} > 0$.

Theorem 2.3 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of $O(\log n)$, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp(\sum_{i,j} J_{i,j}x_ix_j)$.

Theorem 2.4 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of $O(\log\chi(G))$, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp(\sum_{i,j\in E(G)} J_{i,j}x_ix_j)$, and $G = (V(G), E(G))$ is a graph with chromatic number $\chi(G)$.³

While a lot of work has been done on variational methods in general (see the survey by [WJ08] for a detailed overview), to the best of our knowledge nothing is known about the worst-case guarantee that we are interested in here. Moreover, other than a recent paper by [Ris16], no other work has provided provable bounds for variational methods that proceed via a convex relaxation and a rounding thereof.⁴ [Ris16] provides guarantees in the case of Ising models that are also based on pseudo-moment relaxations of the variational principle, albeit only in the special case when the graph is “dense” in a suitably defined sense.⁵ The results there are very specific to the density assumption and cannot be adapted to our worst-case setting. Finally, we mention that in the special case of ferromagnetic Ising models, an algorithm based on MCMC was provided by [JS93], which gives a $(1+\varepsilon)$ factor approximation to the partition function and runs in time $O(n^{11}\mathrm{poly}(1/\varepsilon))$. In spite of this, the focus of this part of our paper is to provide understanding of variational methods in certain cases, as they continue to be popular in practice for their faster running time compared to MCMC-based methods, while being much more poorly studied theoretically.
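To see the low-temperature limit discussed in the introduction in action, here is a small brute-force check (our own illustration; the sizes, seed, and variable names are arbitrary choices): scaling the potentials by $\lambda$ and letting $\lambda$ grow, $\log Z(\lambda J)/\lambda$ converges to the optimization value $\max_x \sum_{i,j} J_{i,j}x_ix_j$.

```python
# Brute-force check (ours) of the low-temperature limit:
# log Z(lambda*J) / lambda -> max_x sum_ij J_ij x_i x_j as lambda -> infinity.
import numpy as np

rng = np.random.default_rng(1)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

# Enumerate all 2^n sign configurations and their energies x^T J x.
xs = np.array([[1 if (b >> i) & 1 else -1 for i in range(n)] for b in range(2**n)])
energies = np.einsum('bi,ij,bj->b', xs, J, xs)

for lam in [1, 10, 100]:
    m = energies.max()
    logZ = lam * m + np.log(np.exp(lam * (energies - m)).sum())  # stable log-sum-exp
    print(f"lambda={lam:>3}: log Z / lambda = {logZ / lam:.4f}  (max energy = {m:.4f})")
```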
³Theorem 2.4 is strictly more general than Theorem 2.3; however, the proof of Theorem 2.3 uses less heavy machinery and is illuminating enough that we feel it merits being presented as a separate theorem.

⁴In some sense, it is possible to give provable bounds for Bethe-entropy based relaxations, via analyzing belief propagation directly, which has been done in cases where there is correlation decay and the graph is locally tree-like. [WJ08] has a detailed overview of such results.

⁵More precisely, they prove that in the case when $\forall i,j: |J_{i,j}| \le \frac{\Delta}{n^2}\sum_{i,j}|J_{i,j}|$, one can get an additive $\varepsilon(\sum_{i,j}J_{i,j})$ approximation to $\log Z$ in time $n^{O(\Delta/\varepsilon^2)}$.

3 Approximate maximum entropy principles

Let us recall the problem we want to solve:

Approximate maximum entropy principles We are given a positive-semidefinite matrix $\Sigma \in \mathbb{R}^{n\times n}$ with $\Sigma_{i,i} = 1$, $\forall i\in[n]$, which is the covariance matrix of a centered distribution over $\{-1,1\}^n$, i.e. $E_\mu[x_ix_j] = \Sigma_{i,j}$, $E_\mu[x_i] = 0$, for a distribution $\mu : \{-1,1\}^n \to \mathbb{R}$. We wish to produce a distribution $\tilde\mu : \{-1,1\}^n \to \mathbb{R}$ with pairwise covariances that match the given ones up to constant factors, and entropy within a constant factor of the maximum entropy distribution with covariance $\Sigma$.⁶

⁶Note that for a distribution over $\{-1,1\}^n$, the maximal entropy a distribution can have is $n$, which is achieved by the uniform distribution.

Before stating the result formally, it will be useful to define the following constant:

Definition 3.1. Define the constant $G = \min_{t\in[-1,1]}\big\{\frac{2}{\pi}\arcsin(t)/t\big\} \approx 0.64$.

We will prove the following main theorem:

Theorem 3.1 (Main, approximate entropy principle). For any positive-semidefinite matrix $\Sigma$ with $\Sigma_{i,i} = 1$, $\forall i$, there is an efficiently sampleable distribution $\tilde\mu : \{-1,1\}^n \to \mathbb{R}$, which can be sampled as $\mathrm{sign}(g)$, where $g \sim N(0, \Sigma+\beta I)$, which satisfies $\frac{G}{1+\beta}\Sigma_{i,j} \le E_{\tilde\mu}[x_ix_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$ and has entropy $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt\beta - 1)^2}{\sqrt3\,\beta}$, where $\beta \ge \frac{1}{3^{1/2}}$.

Note that $\tilde\mu$ is in fact very close to the one which is classically used to round semidefinite relaxations for solving the MAX-CUT problem [GW95]. We will prove Theorem 3.1 in two parts – by first lower bounding the entropy of $\tilde\mu$, and then by bounding the moments of $\tilde\mu$.

Theorem 3.2. The entropy of the distribution $\tilde\mu$ satisfies $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt\beta-1)^2}{\sqrt3\,\beta}$ when $\beta \ge \frac{1}{3^{1/2}}$.

Proof. A sample $g$ from $N(0,\tilde\Sigma)$, with $\tilde\Sigma = \Sigma + \beta I$, can be produced by sampling $g_1 \sim N(0,\Sigma)$, $g_2 \sim N(0,\beta I)$ and setting $g = g_1 + g_2$: the sum of two independent multivariate normals is again a multivariate normal; the mean of $g$ is 0, and since $g_1, g_2$ are independent, the covariance of $g$ is $\Sigma + \beta I = \tilde\Sigma$. Let us denote by $\mathbf{Y} = \mathrm{sign}(g_1+g_2)$ the random variable distributed according to $\tilde\mu$; we wish to lower bound the entropy of $\mathbf{Y}$. Toward that goal, denote the random variable $\mathcal{S} := \{i\in[n] : (g_1)_i^2 \le cD\}$, for $c, D$ to be chosen. Then, with $\gamma = \frac{c-1}{c}$, we have
$$H(\mathbf{Y}) \ge H(\mathbf{Y}\mid\mathcal{S}) = \sum_{S\subseteq[n]}\Pr[\mathcal{S}=S]\,H(\mathbf{Y}\mid\mathcal{S}=S) \ge \sum_{S\subseteq[n],\,|S|\ge\gamma n}\Pr[\mathcal{S}=S]\,H(\mathbf{Y}\mid\mathcal{S}=S)$$
where the first inequality follows since conditioning doesn't decrease entropy, and the latter by the non-negativity of entropy. Continuing the calculation, we get
$$\sum_{S\subseteq[n],\,|S|\ge\gamma n}\Pr[\mathcal{S}=S]\,H(\mathbf{Y}\mid\mathcal{S}=S) \ge \Pr[|\mathcal{S}|\ge\gamma n]\min_{S\subseteq[n],\,|S|\ge\gamma n}H(\mathbf{Y}\mid\mathcal{S}=S)$$
We will lower bound $\Pr[|\mathcal{S}|\ge\gamma n]$ first. Notice that $E[\sum_{i=1}^n(g_1)_i^2] = n$, therefore by Markov's inequality,
$$\Pr\Big[\sum_{i=1}^n(g_1)_i^2 \ge Dn\Big] \le \frac{1}{D}.$$
On the other hand, if $\sum_{i=1}^n(g_1)_i^2 \le Dn$, then $|\{i : (g_1)_i^2 \ge cD\}| \le \frac{n}{c}$, which means that $|\{i : (g_1)_i^2 \le cD\}| \ge n - \frac{n}{c} = \frac{(c-1)n}{c} = \gamma n$. Putting things together, this means
$$\Pr[|\mathcal{S}| \ge \gamma n] \ge 1 - \frac{1}{D}.$$
It remains to lower bound $\min_{S\subseteq[n],\,|S|\ge\gamma n}H(\mathbf{Y}\mid\mathcal{S}=S)$.
For every $S\subseteq[n]$, $|S|\ge\gamma n$, denoting by $\mathbf{Y}_S$ the coordinates of $\mathbf{Y}$ restricted to $S$, we get
$$H(\mathbf{Y}\mid\mathcal{S}=S) \ge H(\mathbf{Y}_S\mid\mathcal{S}=S) \ge H_\infty(\mathbf{Y}_S\mid\mathcal{S}=S) = -\log\big(\max_{y_S}\Pr[\mathbf{Y}_S=y_S\mid\mathcal{S}=S]\big)$$
(where $H_\infty$ is the min-entropy), so we only need to bound $\max_{y_S}\Pr[\mathbf{Y}_S=y_S\mid\mathcal{S}=S]$.

We will now, for any $y_S$, upper bound $\Pr[\mathbf{Y}_S=y_S\mid\mathcal{S}=S]$. Recall that the event $\mathcal{S}=S$ implies that $\forall i\in S$, $(g_1)_i^2 \le cD$. Since $g_2$ is independent of $g_1$, we know that for every fixed $g\in\mathbb{R}^n$:
$$\Pr[\mathbf{Y}_S=y_S\mid\mathcal{S}=S,\, g_1=g] = \prod_{i\in S}\Pr[\mathrm{sign}(g_i+(g_2)_i)=y_i]$$
For a fixed $i\in S$, consider the term $\Pr[\mathrm{sign}(g_i+(g_2)_i)=y_i]$. Without loss of generality, let us assume $g_i > 0$ (the proof is completely symmetric in the other case). Then, since $g_i$ is positive and $g_2$ has mean 0, we have $\Pr[g_i+(g_2)_i < 0] \le \frac12$. Moreover,
$$\Pr[g_i+(g_2)_i>0] = \Pr[(g_2)_i>0]\Pr[g_i+(g_2)_i>0\mid(g_2)_i>0] + \Pr[(g_2)_i<0]\Pr[g_i+(g_2)_i>0\mid(g_2)_i<0]$$
The first term is upper bounded by $\frac12$ since $\Pr[(g_2)_i>0]\le\frac12$. The second term we bound using standard Gaussian tail bounds:
$$\Pr[g_i+(g_2)_i>0\mid(g_2)_i<0] \le \Pr[|(g_2)_i|\le|g_i| \mid (g_2)_i<0] = \Pr[|(g_2)_i|\le|g_i|] \le \Pr[(g_2)_i^2\le cD] = 1-\Pr[(g_2)_i^2>cD] \le 1-\frac{2}{\sqrt{2\pi}}\exp(-cD/2\beta)\Big(\sqrt{\tfrac{\beta}{cD}}-\big(\sqrt{\tfrac{\beta}{cD}}\big)^3\Big)$$
which implies
$$\Pr[(g_2)_i<0]\Pr[g_i+(g_2)_i>0\mid(g_2)_i<0] \le \frac12\bigg(1-\frac{2}{\sqrt{2\pi}}\exp(-cD/2\beta)\Big(\sqrt{\tfrac{\beta}{cD}}-\big(\sqrt{\tfrac{\beta}{cD}}\big)^3\Big)\bigg)$$
Putting these together, we have
$$\Pr[\mathrm{sign}((g_1)_i+(g_2)_i)=y_i] \le 1-\frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\Big(\sqrt{\tfrac{\beta}{cD}}-\big(\sqrt{\tfrac{\beta}{cD}}\big)^3\Big)$$
Together with the fact that $|S|\ge\gamma n$, we get
$$\Pr[\mathbf{Y}_S=y_S\mid\mathcal{S}=S,\, g_1=g] \le \bigg(1-\frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\Big(\sqrt{\tfrac{\beta}{cD}}-\big(\sqrt{\tfrac{\beta}{cD}}\big)^3\Big)\bigg)^{\gamma n}$$
which implies that
$$H(\mathbf{Y}) \ge -\Big(1-\frac{1}{D}\Big)\frac{(c-1)n}{c}\log\bigg(1-\frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\Big(\sqrt{\tfrac{\beta}{cD}}-\big(\sqrt{\tfrac{\beta}{cD}}\big)^3\Big)\bigg)$$
By setting $c = D = 3^{1/4}\sqrt\beta$, a straightforward (albeit unpleasant) calculation verifies that $H(\mathbf{Y}) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt\beta-1)^2}{\sqrt3\,\beta}$, as we need.

We next show that the moments of the distribution are preserved up to a factor of $\frac{G}{1+\beta}$.

Lemma 3.1. The distribution $\tilde\mu$ satisfies $\frac{G}{1+\beta}\Sigma_{i,j} \le E_{\tilde\mu}[X_iX_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$.

Proof. Consider the Gram decomposition $\tilde\Sigma_{i,j} = \langle v_i, v_j\rangle$. Then a sample from $\tilde\mu$ is distributed as $(\mathrm{sign}(\langle v_1,s\rangle),\dots,\mathrm{sign}(\langle v_n,s\rangle))$ where $s\sim N(0,I)$. Similarly as in the analysis of Goemans-Williamson [GW95], if $\bar v_i = \frac{1}{\|v_i\|}v_i$, we have
$$G\langle\bar v_i,\bar v_j\rangle \le E_{\tilde\mu}[X_iX_j] = \frac{2}{\pi}\arcsin(\langle\bar v_i,\bar v_j\rangle) \le \langle\bar v_i,\bar v_j\rangle.$$
However, since $\langle\bar v_i,\bar v_j\rangle = \frac{1}{\|v_i\|\|v_j\|}\langle v_i,v_j\rangle = \frac{1}{\|v_i\|\|v_j\|}\tilde\Sigma_{i,j} = \frac{1}{\|v_i\|\|v_j\|}\Sigma_{i,j}$ (for $i\ne j$) and $\|v_i\| = \sqrt{\tilde\Sigma_{i,i}} = \sqrt{1+\beta}$, $\forall i\in[1,n]$, we get that
$$\frac{G}{1+\beta}\Sigma_{i,j} \le E_{\tilde\mu}[X_iX_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$$
as we want.

Theorem 3.2 and Lemma 3.1 together imply Theorem 3.1.

4 Provable bounds for variational methods

In this section we consider applications of the approximate maximum entropy principles we developed to calculating partition functions of Ising models. Before we dive into the results, we give brief preliminaries on variational methods and pseudo-moment convex relaxations.

Preliminaries on variational methods and pseudo-moment convex relaxations Recall that variational methods are based on the following simple lemma, which characterizes $\log Z$ as the solution of an optimization problem. It essentially dates back to Gibbs [Ell12], who used it in the context of statistical mechanics, though it has been rediscovered by machine learning researchers [WJ08]:

Lemma 4.1 (Variational characterization of $\log Z$). Let us denote by $\mathcal{M}$ the polytope of distributions over $\{-1,1\}^n$.
Then,
$$\log Z = \max_{\mu\in\mathcal{M}}\Big\{\sum_t J_t E_\mu[\phi_t(x)] + H(\mu)\Big\} \qquad (1)$$
While the above lemma reduces calculating $\log Z$ to an optimization problem, optimizing over the polytope $\mathcal{M}$ is impossible in polynomial time. We will proceed in a way which is natural for optimization problems – by instead optimizing over a relaxation $\mathcal{M}'$ of that polytope, associated with the degree-2 Lasserre hierarchy. Intuitively, $\mathcal{M}'$ has as variables tentative pairwise moments of a distribution over $\{-1,1\}^n$, and it imposes all constraints on the moments that hold for distributions over $\{-1,1\}^n$. To define $\mathcal{M}'$ more precisely we will need the following notion (for a more in-depth review of moment-based convex hierarchies, the reader can consult [BKS14]):

Definition 4.1. A degree-2 pseudo-moment⁷ $\tilde E_\nu[\cdot]$ is a linear operator mapping polynomials of degree at most 2 to $\mathbb{R}$, such that $\tilde E_\nu[x_i^2] = 1$ and $\tilde E_\nu[p(x)^2] \ge 0$ for any polynomial $p(x)$ of degree 1.

⁷The reason $\tilde E_\nu[\cdot]$ is called a pseudo-moment is that it behaves like the moments of a distribution $\nu:\{-1,1\}^n\to[0,1]$, albeit only over polynomials of degree at most 2.

We will be optimizing over the polytope $\mathcal{M}'$ of all degree-2 pseudo-moments, i.e. we will consider solving
$$\max_{\tilde E_\nu[\cdot]\in\mathcal{M}'}\Big\{\sum_t J_t\tilde E_\nu[\phi_t(x)] + \tilde H(\tilde E_\nu[\cdot])\Big\}$$
where $\tilde H$ is a proxy for the entropy which we will have to define (since entropy is a global property that depends on all moments, while $\tilde E_\nu$ only contains information about second-order moments).

To see that this optimization problem is convex, we show that it can easily be written as a semidefinite program. Namely, note that the pseudo-moment operators are linear, so it suffices to define them over monomials only. Hence, the variables will simply be $\tilde E_\nu(x_S)$ for all monomials $x_S$ of degree at most 2. The constraints $\tilde E_\nu[x_i^2] = 1$ are then clearly linear, as is the “energy part” of the objective function, so we only need to worry about the constraint $\tilde E_\nu[p(x)^2]\ge 0$ and the entropy functional. The constraint $\tilde E_\nu[p(x)^2]\ge 0$ can be written as a PSD constraint: namely, if we define the matrix $Q$, indexed by all the monomials of degree at most 1, which satisfies $Q(x_S, x_T) = \tilde E_\nu[x_Sx_T]$, it is easy to see that $\tilde E_\nu[p(x)^2]\ge 0$ is equivalent to $Q \succeq 0$.

Hence, the final concern is how to write an expression for the entropy in terms of the low-order moments, since entropy is a global property that depends on all moments. There are many candidates for this in machine learning, like the Bethe/Kikuchi entropy, the tree-reweighted Bethe entropy, the log-determinant, etc. However, in the worst case none of them come with any guarantees. We will in fact show that the entropy functional is not an issue – we relax the entropy trivially to $n$. Given all of this, the final relaxation we will consider is:
$$\max_{\tilde E_\nu[\cdot]\in\mathcal{M}'}\Big\{\sum_t J_t\tilde E_\nu[\phi_t(x)] + n\Big\} \qquad (2)$$
From the prior setup it is clear that the solution of (2) is an upper bound on $\log Z$. To prove a claim like Theorem 2.3 or Theorem 2.4, we will then provide a rounding of the solution. In this instance, this means producing a distribution $\tilde\mu$ for which the value of $\sum_t J_tE_{\tilde\mu}[\phi_t(x)] + H(\tilde\mu)$ is comparable to the value of the solution. Note this is slightly different from the usual requirement in optimization, where one only cares about producing a single $x\in\{-1,1\}^n$ with value comparable to the solution. Our distribution $\tilde\mu$ will have entropy $\Omega(n)$, and will preserve the “energy” portion of the objective $\sum_t J_tE_\mu[\phi_t(x)]$ up to a factor comparable to what is achievable in the optimization setting.
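Concretely, for Ising models relaxation (2) is just a semidefinite program in the matrix $Q$ above. The following is a minimal sketch of it – ours rather than the authors' code – using the off-the-shelf modeling library cvxpy (the function and variable names are our own; since there are no singleton potentials, the row and column of $Q$ corresponding to the constant monomial can be dropped without changing the optimum):

```python
# A minimal sketch (ours) of relaxation (2) for an Ising model, as an SDP in Q.
import numpy as np
import cvxpy as cp

def pseudo_moment_upper_bound(J):
    """Upper bound on log Z: maximize <J, Q> + n over Q PSD with unit diagonal,
    where Q[i, j] plays the role of the pseudo-moment E~[x_i x_j]."""
    n = J.shape[0]
    Q = cp.Variable((n, n), symmetric=True)
    constraints = [Q >> 0, cp.diag(Q) == 1]     # PSD-ness and E~[x_i^2] = 1
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(J, Q)) + n), constraints)
    problem.solve()
    return problem.value

# Sanity check on n = 10 spins: the relaxation upper bounds the true log Z.
rng = np.random.default_rng(0)
n = 10
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
xs = np.array([[1 if (b >> i) & 1 else -1 for i in range(n)] for b in range(2**n)])
logZ = np.log(np.exp(np.einsum('bi,ij,bj->b', xs, J, xs)).sum())
print(f"relaxation: {pseudo_moment_upper_bound(J):.2f} >= log Z: {logZ:.2f}")
```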
Warmup: exponential family analogue of MAX-CUT As a warmup, to illustrate the basic ideas behind the above rounding strategy before we consider Ising models, we consider the exponential family analogue of MAX-CUT, defined by the functionals $\phi_{i,j}(x) = (x_i-x_j)^2$. Concretely, we wish to approximate the partition function of the distribution $\mu(x)\propto\exp\big(\sum_{i,j}J_{i,j}(x_i-x_j)^2\big)$. We will prove the following simple observation:

Observation 4.1. The relaxation (2) provides a factor 2 approximation of $\log Z$.

Proof. We proceed as outlined in the previous section, by providing a rounding of (2). We point out again that, unlike the standard case in optimization, where typically one needs to produce an assignment of the variables, because of the entropy term it is crucial here that the rounding produces a distribution. The distribution $\tilde\mu$ we produce here is especially simple: we round each $x_i$ to $\pm1$ independently and uniformly at random. Then, clearly $H(\tilde\mu) = n$. On the other hand, we have $\Pr_{\tilde\mu}[(x_i-x_j)^2 = 4] = \frac12$, since $x_i$ and $x_j$ are rounded independently. Hence, $E_{\tilde\mu}[(x_i-x_j)^2] = 2 \ge \frac12\tilde E_\nu[(x_i-x_j)^2]$. Altogether, this implies
$$\sum_{i,j}J_{i,j}E_{\tilde\mu}[(x_i-x_j)^2] + H(\tilde\mu) \ge \frac12\Big(\sum_{i,j}J_{i,j}\tilde E_\nu[(x_i-x_j)^2] + n\Big)$$
as we needed.

4.1 Ising models

We proceed with the main results of this section on Ising models, i.e. the case where $\phi_{i,j}(x) = x_ix_j$. We treat the ferromagnetic and the general case separately, as outlined in Section 2. To be concrete, we are given potentials $J_{i,j}$, and we wish to calculate the partition function of the Ising model $\mu(x)\propto\exp(\sum_{i,j}J_{i,j}x_ix_j)$.

Ferromagnetic case Recall that in the ferromagnetic case of the Ising model, the potentials satisfy $J_{i,j} > 0$. We will provide a convex relaxation which gives a constant factor approximation in this case. First, recall the famous first Griffiths inequality due to Griffiths [Gri67], which states that in the ferromagnetic case, $E_\mu[x_ix_j] \ge 0$, $\forall i,j$. Using this inequality, we consider the following natural strengthening of relaxation (2):
$$\max_{\tilde E_\nu[\cdot]\in\mathcal{M}';\ \tilde E_\nu[x_ix_j]\ge0,\,\forall i,j}\Big\{\sum_t J_t\tilde E_\nu[\phi_t(x)] + n\Big\} \qquad (3)$$
We will prove the following theorem, as a straightforward implication of our claims from Section 3:

Theorem 4.1. The relaxation (3) provides a factor 50 approximation of $\log Z$.

Proof. Notice that, due to Griffiths' inequality, (3) is in fact a relaxation of the Gibbs variational principle, and hence an upper bound on $\log Z$. As before, we will provide a rounding of (3). We will use the distribution $\tilde\mu$ we designed in Section 3 – the sign of a Gaussian with covariance matrix $\Sigma + \beta I$, for a $\beta$ which we will specify. By Theorem 3.2, we then have $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt\beta-1)^2}{\sqrt3\,\beta}$ whenever $\beta\ge\frac{1}{3^{1/2}}$. By Lemma 3.1, on the other hand, we can prove that
$$E_{\tilde\mu}[x_ix_j] \ge \frac{G}{1+\beta}\tilde E_\nu[x_ix_j]$$
By setting $\beta = 21.8202$, we get $\frac{1}{25}\cdot\frac{(3^{1/4}\sqrt\beta-1)^2}{\sqrt3\,\beta} \ge 0.02$ and $\frac{G}{1+\beta}\ge0.02$, which implies that
$$\sum_{i,j}J_{i,j}E_{\tilde\mu}[x_ix_j] + H(\tilde\mu) \ge 0.02\Big(\sum_{i,j}J_{i,j}\tilde E_\nu[x_ix_j] + n\Big)$$
which is what we need.

Note that the above proof does not work in the general Ising model case: when $\tilde E_\nu[x_ix_j]$ can be either positive or negative, even if we preserve each $\tilde E_\nu[x_ix_j]$ up to a constant factor, this may not preserve the sum $\sum_{i,j}J_{i,j}\tilde E_\nu[x_ix_j]$, due to cancellations in that expression. A quick numeric check of the constants used in this proof is sketched below.
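As a sanity check on the constants in this proof – our own few lines of Python, not part of the paper – one can verify numerically that $\beta = 21.8202$ makes both the per-spin entropy factor and the moment factor clear 0.02, which is where the overall factor $1/0.02 = 50$ comes from:

```python
# Numeric check (ours) of the constants behind Theorem 4.1.
import numpy as np

t = np.linspace(1e-9, 1.0, 100_000)        # arcsin(t)/t is even, so t > 0 suffices
G = np.min(2 / np.pi * np.arcsin(t) / t)   # ~ 0.6366, matching Definition 3.1

beta = 21.8202
entropy_factor = (3**0.25 * np.sqrt(beta) - 1)**2 / (25 * np.sqrt(3) * beta)
moment_factor = G / (1 + beta)
print(f"entropy factor per spin: {entropy_factor:.4f}")   # ~ 0.028 >= 0.02
print(f"moment factor:           {moment_factor:.4f}")    # ~ 0.028 >= 0.02
```

Both factors come out to roughly 0.028, so $\beta$ is essentially chosen to balance the two bounds.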
General Ising models case Finally, we tackle the general Ising model case. As noted above, a straightforward application of the results proven in Section 3 does not work, so we have to consider a different rounding – again inspired by roundings used in optimization. The intuition is the same as in the ferromagnetic case: we wish to design a rounding which preserves the “energy” portion of the objective while having high entropy. In the previous section, this was achieved by modifying the Goemans-Williamson rounding so that it produces a high-entropy distribution. We do a similar thing here, by modifying the roundings due to [CW04] and [AMMN06]. The convex relaxation we consider is just the basic one, (2), and we will prove the following two theorems:

Theorem 4.2. The relaxation (2) provides a factor $O(\log n)$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_ix_j$.

Theorem 4.3. The relaxation (2) provides a factor $O(\log\chi(G))$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_ix_j$ for $i,j\in E(G)$ of some graph $G = (V(G), E(G))$, where $\chi(G)$ is the chromatic number of $G$.

Since the chromatic number of a graph is at most $n$, the second theorem is in fact strictly stronger than the first; however, the proof of the first theorem uses less heavy machinery and is illuminating enough to be presented on its own. Due to space constraints, the proofs of these theorems are deferred to the appendix.

5 Conclusion

In summary, we presented computationally efficient approximate versions of the classical max-entropy principle of [Jay57]: efficiently sampleable distributions which preserve given pairwise moments up to a multiplicative constant factor, while having entropy within a constant factor of the maximum entropy distribution matching those moments. Additionally, we applied our insights to designing provable variational methods for Ising models which provide guarantees for approximating the log-partition function comparable to those in the optimization setting. Our methods are based on convex relaxations of the standard variational principle due to Gibbs; they are extremely generic, and we hope they will find applications for other exponential families.
1. What is the main contribution of the paper regarding variational inference in Ising models? 2. How does the proposed approach differ from previous works, particularly Ris16? 3. Are there any conjectures on the best possible approximation bounds for these problems? 4. Can you provide further clarification on minor points mentioned in the review?
Review
Review Considering Ising models (with no local fields), a very interesting twist on the standard variational inference approach is developed to show a convex relaxation and rounding that yields a multiplicative factor approximation to log Z. The factor is O(log chi(G)) where chi(G) is the chromatic number of the model graph. For fully attractive (ferromagnetic) models, this improves to a constant factor of 50. The paper is clear and well written with helpful background and remarks [I have not checked the proofs in detail but the ideas are presented clearly and I trust that the details are correct]. The methods used and results obtained are very interesting. To my knowledge they are novel and may prove useful in other work. Note that a multiplicative bound on Z (hence an additive bound on log Z) would be much stronger and more useful, but as noted in the paper, this is likely impossible. It is not clear how useful these results will be in the near term in practice, but still they are theoretically important and provide good leads for future work. The authors focus on `symmetric' Ising models with no local fields/singleton potentials, i.e. x_i \in {-1,+1} and E(x_i)=0 for all i. It is worth noting that if an Ising model does have local fields, it can always be transformed into a larger model without any local fields by adding one extra variable and encoding the original singleton potentials as edge potentials to the added variable. The larger model has exactly twice the partition function of the original model, e.g. see Weller ICML 2016, Uprooting and Rerooting Graphical Models. Further observations on the following would be welcome: A longer comment on the differences between the analysis here and in Ris16 (so expand on lines 84-88 and 105-108). Are there any results/conjectures on what might be the best possible approximation bounds for these problems, so we can see how close these results are?
Minor points: Section 2 is very helpful as a clear statement of the main results and prior related work. It would be good to tie it to the later text more neatly: Definition 3.1 of the constant G should come before it is first used in Theorem 2.1. Clarify where exactly Theorems 2.1-2.4 are proved.
line 32: a -> an
33: from from -> than
35-38: very interesting; in 37: delete one 'there'
41: The -> A (there are other important reasons to compute Z)
45-46: Perhaps mention that we know that the Bethe partition function estimate lower bounds the true value (i.e. Z_B \leq Z) for ferromagnetic models, see Ruozzi NIPS 2012, The Bethe partition function of log-supermodular graphical models, and Weller and Jebara NIPS 2014, Clamping variables and approximate inference.
50: If the configuration with max score is not unique, then log Z -> the max * the multiplicity of the max
57: Perhaps mention that better results can be obtained for various restrictions such as bounded treewidth (Wainwright and Jordan 2004, Treewidth-based conditions for exactness of the Sherali-Adams and Lasserre relaxations) or if certain minors are not present (Weller UAI 2016, Characterizing Tightness of LP Relaxations by Forbidding Signed Minors).
104: Perhaps move the footnote 3 to the end of 103 to save space.
118: theoretically much more poorly studied - not sure if that's true
121: delete 'what'
238: add space between bound) and of. Delete Same.
253: Insert 'a' before rounding
254, 256, 258: should the 2 be (2) as in \eqref
References: In several places, caps should be fixed, e.g. Grothendieck -> {G}rothendieck.
I believe [BB] should be [BB08] from NIPS 2008.
NIPS
1. What is the focus of the paper, and what are the key contributions of the proposed approach? 2. What are the strengths and weaknesses of the paper regarding its assumptions, applications, and previous works? 3. How does the reviewer assess the significance of the presented results, particularly in terms of their usefulness and practicality? 4. Are there any concerns or questions about the proof and explanation of the main theorem and other results? 5. What is the reviewer's opinion on the presentation, language, and organization of the paper? 6. Do the provided theorems and convex programming relaxations offer novel solutions to the problem, and how do they compare to existing methods? 7. Are there any missing elements or aspects that the reviewer would like to see added or discussed in the paper?
Review
Review The paper presents an approach to approximate maximum entropy principles with applications to estimating mean parameters along with partition functions for Ising models. Along with a review of previous well-known maximum entropy and variational methods, they present an approximate distribution that is not entropy maximizing, and thus does not require hardness proofs nor assumptions on the potential functions. The majority of the paper is dedicated to the proof of Theorem 3.1, stating that an efficiently sampleable distribution exists for a PSD covariance matrix with a specific minimum entropy, and to the description of 3 additional theorems stating that a convex programming relaxation exists for estimating the log-partition function up to specific multiplicative factors. While the paper has no fatal flaws, there appear to be many questions that a reader may have while understanding their work. The most obvious question is the usefulness in application of the ideas presented. While their results are somewhat interesting because of the reduction in assumptions, in almost all applications of the Ising model as presented, those assumptions described by [WJ08] and by others are completely reasonable, and lead to theoretical results much stronger than those presented here. In the case of the ferromagnetic Ising model, the authors themselves refer to the work by [JS93], which gives a poly-time approximate MCMC algorithm. The main result describing the efficient sampling distribution is definitely the highlight of the work, but most of the paper is dedicated to its proof rather than its ramifications. The section on variational bounds for the other three results is also dedicated to the explanation and proofs of the theorems (the last is allocated to the supplement). Again, it is hard for a reader to understand the application of these results. How can approximating the log-partition function up to a multiplicative factor of 50 be useful? In the supplement they outline a particular novel rounding algorithm which allows for their convex programming claim, but there is no mention of this in the paper. Though the paper could also use some significant clarity in language and presentation, the most concerning issue for this reviewer is the lack of application analysis and experiments. They provide theorems describing convex programming relaxations to generally intractable problems and do not present experimental results demonstrating the applicability of their methods. The problem of variational methods is extremely well-studied, particularly with Ising models. To provide a novel relaxation without showing the application significantly reduces the potential impact on a reader, even if the theory may be well supported.
Title Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods Abstract The well known maximum-entropy principle due to Jaynes, which states that given mean parameters, the maximum entropy distribution matching them is in an exponential family has been very popular in machine learning due to its “Occam’s razor” interpretation. Unfortunately, calculating the potentials in the maximumentropy distribution is intractable [BGS14]. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to the maximum entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube. ([AN06]) 1 Introduction Maximum entropy principle The maximum entropy principle [Jay57] states that given mean parameters, i.e. Eµ[φt(x)] for a family of functionals φt(x), t ∈ [1, T ], where µ is distribution over the hypercube {−1, 1}n, the entropy-maximizing distribution µ is an exponential family distribution, i.e. µ(x) ∝ exp( ∑T t=1 Jtφt(x)) for some potentials Jt, t ∈ [1, T ]. 1 This principle has been one of the reasons for the popularity of graphical models in machine learning: the “maximum entropy” assumption is interpreted as “minimal assumptions” on the distribution other than what is known about it. However, this principle is problematic from a computational point of view. Due to results of [BGS14, SV14], the potentials Jt of the Ising model, in many cases, are impossible to estimate well in polynomial time, unless NP = RP – so merely getting the description of the maximum entropy distribution is already hard. Moreover, in order to extract useful information about this distribution, usually we would also like to at least be able to sample efficiently from this distribution – which is typically NP-hard or even #P-hard. 1There is a more general way to state this principle over an arbitrary domain, not just the hypercube, but for clarity in this paper we will focus on the hypercube only. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. In this paper we address this problem in certain cases. We provide a “bi-criteria” approximation for the special case where the functionals φt(x) are φi,j(x) = xixj , i.e. pairwise moments: we produce a efficiently sampleable distribution over the hypercube which matches these moments up to multiplicative constant factors, and has entropy at most a constant factor smaller from from the entropy of the maximum entropy distribution. 2 Furthermore, the distribution which achieves this is very natural: the sign of a multivariate normal variable. This provides theoretical explanation for the phenomenon observed by the computational neuroscience community [BB07] that this distribution (there named dichotomized Gaussian there) has near-maximum entropy. 
Variational methods The above results also allow us to get results for a seemingly unrelated problem – approximating the partition function Z = ∑ x∈{−1,1}n exp( ∑T t=1 Jtφt(x)) of a member of an exponential family. The reason this task is important is that it is tied to calculating marginals. One of the ways this task is solved is variational methods: namely, expressing logZ as an optimization problem. While there is a plethora of work on variational methods, of many flavors (mean field, Bethe/Kikuchi relaxations, TRBP, etc. for a survey, see [WJ08]), they typically come either with no guarantees, or with guarantees in very constrained cases (e.g. loopless graphs; graphs with large girth, etc. [WJW03, WJW05]). While this is a rich area of research, the following extremely basic research question has not been answered: What is the best approximation guarantee on the partition function in the worst case (with no additional assumptions on the potentials)? In the low-temperature limit, i.e. when |Jt| → ∞, logZ → maxx∈{−1,1}n ∑T t=1 Jtφt(x) - i.e. the question reduces to purely optimization. In this regime, this question has very satisfying answers for many families φt(x). One classical example is when the functionals are φi,j(x) = xixj . In the graphical model community, these are known as Ising models, and in the optimization community this is the problem of optimizing quadratic forms and has been studied by [CW04, AN06, AMMN06]. In the optimization version, the previous papers showed that in the worst case, one can get O(log n) factor multiplicative factor approximation of it, and that unless P = NP, one cannot get better than constant factor approximations of it. In the finite-temperature version, it is known that it is NP-hard to achieve a 1 + factor approximation to the partition function (i.e. construct a FPRAS) [SS12], but nothing is known about coarser approximations. We prove in this paper, informally, that one can get comparable multiplicative guarantees on the log-partition function in the finite temperature case as well – using the tools and insights we develop on the maximum entropy principles. Our methods are extremely generic, and likely to apply to many other exponential families, where algorithms based on linear/semidefinite programming relaxations are known to give good guarantees in the optimization regime. 2 Statements of results and prior work Approximate maximum entropy The main theorem in this section is the following one. Theorem 2.1. For any covariance matrix Σ of a centered distribution µ : {−1, 1}n → R, i.e. Eµ[xixj ] = Σi,j , Eµ[xi] = 0, there is an efficiently sampleable distribution µ̃, which can be sampled as sign(g), where g ∼ N (0,Σ + βI) and satisfies G 1 + β Σi,j ≤ Eµ̃[XiXj ] ≤ 1 1 + β Σi,j and has entropy H(µ̃) ≥ n25 (31/4 √ β−1)2√ 3β , for any β ≥ 1 31/2 . There are two prior works on computational issues relating to maximum entropy principles, both proving hardness results. [BGS14] considers the “hard-core” model where the functionals φt are such that the distribution µ(x) puts zero mass on configurations x which are not independent sets with respect to some graph G. 2In fact, we produce a distribution with entropy Ω(n), which implies the latter claim since the maximum entropy of any distribution of over {−1, 1}n is at most n They show that unless NP = RP, there is no FPRAS for calculating the potentials Jt, given the mean parameters Eµ[φt(x)]. [SV14] prove an equivalence between calculating the mean parameters and calculating partition functions. 
More precisely, they show that given an oracle that can calculate the mean parameters up to a $(1+\epsilon)$ multiplicative factor in time $O(\mathrm{poly}(1/\epsilon))$, one can calculate the partition function of the same exponential family up to a $(1+O(\mathrm{poly}(\epsilon)))$ multiplicative factor, in time $O(\mathrm{poly}(1/\epsilon))$. Note, the $\epsilon$ in this work potentially needs to be polynomially small in $n$ (i.e. an oracle that can calculate the mean parameters to a fixed multiplicative constant cannot be used). Both results prove hardness for fine-grained approximations to the maximum entropy principle, and ask for outputting approximations to the mean parameters. Our result circumvents these hardness results by providing a distribution which is not in the maximum-entropy exponential family, and is allowed to only approximately match the moments as well. To the best of our knowledge, such an approximation, while very natural, has not been considered in the literature.

Provable variational methods The main theorems in this section will concern the approximation factor that can be achieved by degree-2 pseudo-moment relaxations of the standard variational principle due to Gibbs ([Ell12]). As outlined before, we will be concerned with a particularly popular exponential family: Ising models. We will prove the following three results:

Theorem 2.2 (Ferromagnetic Ising, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to multiplicative approximation factor 50, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp\left(\sum_{i,j} J_{i,j} x_i x_j\right)$ for $J_{i,j} > 0$.

Theorem 2.3 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to multiplicative approximation factor $O(\log n)$, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp\left(\sum_{i,j} J_{i,j} x_i x_j\right)$.

Theorem 2.4 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to multiplicative approximation factor $O(\log \chi(G))$, the value of $\log Z$, where $Z$ is the partition function of the exponential distribution $\mu(x) \propto \exp\left(\sum_{i,j \in E(G)} J_{i,j} x_i x_j\right)$, where $G = (V(G), E(G))$ is a graph with chromatic number $\chi(G)$.3

While a lot of work has been done on variational methods in general (see the survey by [WJ08] for a detailed overview), to the best of our knowledge nothing is known about the worst-case guarantee that we are interested in here. Moreover, other than a recent paper by [Ris16], no other work has provided provable bounds for variational methods that proceed via a convex relaxation and a rounding thereof.4 [Ris16] provides guarantees in the case of Ising models that are also based on pseudo-moment relaxations of the variational principle, albeit only in the special case when the graph is "dense" in a suitably defined sense.5 The results there are very specific to the density assumption and cannot be adapted to our worst-case setting. Finally, we mention that in the special case of ferromagnetic Ising models, an algorithm based on MCMC was provided by [JS93], which can give an approximation factor of $(1+\epsilon)$ to the partition function and runs in time $O(n^{11}\mathrm{poly}(1/\epsilon))$. In spite of this, the focus of this part of our paper is to provide understanding of variational methods in certain cases, as they continue to be popular in practice for their faster running time compared to MCMC-based methods, but are theoretically much more poorly studied.
3 Theorem 2.4 is strictly more general than Theorem 2.3; however, the proof of Theorem 2.3 uses less heavy machinery and is illuminating enough that we feel it merits being presented as a separate theorem.
4 In some sense, it is possible to give provable bounds for Bethe-entropy based relaxations, via analyzing belief propagation directly, which has been done in cases where there is correlation decay and the graph is locally tree-like. [WJ08] has a detailed overview of such results.
5 More precisely, they prove that in the case when $\forall i,j$: $|J_{i,j}| \le \frac{\Delta}{n^2}\sum_{i,j}|J_{i,j}|$, one can get an additive $\epsilon\left(\sum_{i,j}|J_{i,j}|\right)$ approximation to $\log Z$ in time $n^{O(\Delta/\epsilon^2)}$.

3 Approximate maximum entropy principles

Let us recall the problem we want to solve:

Approximate maximum entropy principles We are given a positive-semidefinite matrix $\Sigma \in \mathbb{R}^{n\times n}$ with $\Sigma_{i,i} = 1, \forall i \in [n]$, which is the covariance matrix of a centered distribution over $\{-1,1\}^n$, i.e. $\mathbb{E}_\mu[x_i x_j] = \Sigma_{i,j}$, $\mathbb{E}_\mu[x_i] = 0$, for a distribution $\mu : \{-1,1\}^n \to \mathbb{R}$. We wish to produce a distribution $\tilde\mu : \{-1,1\}^n \to \mathbb{R}$ with pairwise covariances that match the given ones up to constant factors, and entropy within a constant factor of the maximum entropy distribution with covariance $\Sigma$.6

Before stating the result formally, it will be useful to define the following constant:

Definition 3.1. Define the constant $G = \min_{t\in[-1,1]}\left\{\frac{2}{\pi}\arcsin(t)/t\right\} \approx 0.64$.

We will prove the following main theorem:

Theorem 3.1 (Main, approximate entropy principle). For any positive-semidefinite matrix $\Sigma$ with $\Sigma_{i,i} = 1, \forall i$, there is an efficiently sampleable distribution $\tilde\mu : \{-1,1\}^n \to \mathbb{R}$, which can be sampled as $\mathrm{sign}(g)$, where $g \sim \mathcal{N}(0, \Sigma + \beta I)$, and satisfies $\frac{G}{1+\beta}\Sigma_{i,j} \le \mathbb{E}_{\tilde\mu}[x_i x_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$, and has entropy $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\beta}$, where $\beta \ge \frac{1}{3^{1/2}}$.

Note $\tilde\mu$ is in fact very close to the one which is classically used to round semidefinite relaxations for solving the MAX-CUT problem [GW95]. We will prove Theorem 3.1 in two parts – by first lower bounding the entropy of $\tilde\mu$, and then by bounding the moments of $\tilde\mu$.

Theorem 3.2. The entropy of the distribution $\tilde\mu$ satisfies $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\beta}$ when $\beta \ge \frac{1}{3^{1/2}}$.

Proof. A sample $g$ from $\mathcal{N}(0, \tilde\Sigma)$ can be produced by sampling $g_1 \sim \mathcal{N}(0,\Sigma)$, $g_2 \sim \mathcal{N}(0, \beta I)$ and setting $g = g_1 + g_2$. The sum of two multivariate normals is again a multivariate normal. Furthermore, the mean of $g$ is 0, and since $g_1, g_2$ are independent, the covariance of $g$ is $\Sigma + \beta I = \tilde\Sigma$. Let us denote by $Y = \mathrm{sign}(g_1 + g_2)$ the random variable distributed according to $\tilde\mu$. We wish to lower bound the entropy of $Y$. Toward that goal, denote the random variable $\mathcal{S} := \{i \in [n] : (g_1)_i^2 \le cD\}$, for $c, D$ to be chosen. Then, for $\gamma = \frac{c-1}{c}$, we have:
$$H(Y) \ge H(Y \mid \mathcal{S}) = \sum_{S \subseteq [n]} \Pr[\mathcal{S} = S]\, H(Y \mid \mathcal{S} = S) \ge \sum_{S \subseteq [n], |S| \ge \gamma n} \Pr[\mathcal{S} = S]\, H(Y \mid \mathcal{S} = S)$$
where the first inequality follows since conditioning does not increase entropy, and the latter by the non-negativity of entropy. Continuing the calculation, we get:
$$\sum_{S \subseteq [n], |S| \ge \gamma n} \Pr[\mathcal{S} = S]\, H(Y \mid \mathcal{S} = S) \ge \Pr[|\mathcal{S}| \ge \gamma n] \min_{S \subseteq [n], |S| \ge \gamma n} H(Y \mid \mathcal{S} = S).$$
We will lower bound $\Pr[|\mathcal{S}| \ge \gamma n]$ first. Notice that $\mathbb{E}\left[\sum_{i=1}^n (g_1)_i^2\right] = n$; therefore, by Markov's inequality, $\Pr\left[\sum_{i=1}^n (g_1)_i^2 \ge Dn\right] \le \frac{1}{D}$. On the other hand, if $\sum_{i=1}^n (g_1)_i^2 \le Dn$, then $|\{i : (g_1)_i^2 \ge cD\}| \le \frac{n}{c}$, which means that $|\{i : (g_1)_i^2 \le cD\}| \ge n - \frac{n}{c} = \frac{(c-1)n}{c} = \gamma n$. Putting things together, this means $\Pr[|\mathcal{S}| \ge \gamma n] \ge 1 - \frac{1}{D}$. It remains to lower bound $\min_{S\subseteq[n], |S|\ge\gamma n} H(Y \mid \mathcal{S} = S)$.
For every $S \subseteq [n]$, $|S| \ge \gamma n$, denoting by $Y_S$ the coordinates of $Y$ restricted to $S$, we get
$$H(Y \mid \mathcal{S} = S) \ge H(Y_S \mid \mathcal{S} = S) \ge H_\infty(Y_S \mid \mathcal{S} = S) = -\log\left(\max_{y_S} \Pr[Y_S = y_S \mid \mathcal{S} = S]\right)$$
(where $H_\infty$ is the min-entropy), so we only need to bound $\max_{y_S} \Pr[Y_S = y_S \mid \mathcal{S} = S]$.

6 Note that for a distribution over $\{-1,1\}^n$, the maximal entropy a distribution can have is $n$, which is achieved by the uniform distribution.

We will now, for any $y_S$, upper bound $\Pr[Y_S = y_S \mid \mathcal{S} = S]$. Recall that the event $\mathcal{S} = S$ implies that $\forall i \in S$, $(g_1)_i^2 \le cD$. Since $g_2$ is independent of $g_1$, we know that for every fixed $g \in \mathbb{R}^n$:
$$\Pr[Y_S = y_S \mid \mathcal{S} = S, g_1 = g] = \prod_{i\in S} \Pr[\mathrm{sign}([g]_i + [g_2]_i) = y_i].$$
For a fixed $i \in S$, consider the term $\Pr[\mathrm{sign}([g]_i + [g_2]_i) = y_i]$. Without loss of generality, let us assume $[g]_i > 0$ (the proof is completely symmetric in the other case). Then, since $[g]_i$ is positive and $g_2$ has mean 0, we have $\Pr[[g]_i + [g_2]_i < 0] \le \frac12$. Moreover,
$$\Pr[[g]_i + [g_2]_i > 0] = \Pr[[g_2]_i > 0]\Pr[[g]_i + [g_2]_i > 0 \mid [g_2]_i > 0] + \Pr[[g_2]_i < 0]\Pr[[g]_i + [g_2]_i > 0 \mid [g_2]_i < 0].$$
The first term is upper bounded by $\frac12$ since $\Pr[[g_2]_i > 0] \le \frac12$. The second term we will bound using standard Gaussian tail bounds:
$$\Pr[[g]_i + [g_2]_i > 0 \mid [g_2]_i < 0] \le \Pr[|[g_2]_i| \le |[g]_i| \mid [g_2]_i < 0] = \Pr[|[g_2]_i| \le |[g]_i|] \le \Pr[([g_2]_i)^2 \le cD] = 1 - \Pr[([g_2]_i)^2 > cD] \le 1 - \frac{2}{\sqrt{2\pi}}\exp(-cD/2\beta)\left(\sqrt{\tfrac{\beta}{cD}} - \left(\sqrt{\tfrac{\beta}{cD}}\right)^3\right)$$
which implies
$$\Pr[[g_2]_i < 0]\Pr[[g]_i + [g_2]_i > 0 \mid [g_2]_i < 0] \le \frac12\left(1 - \frac{2}{\sqrt{2\pi}}\exp(-cD/2\beta)\left(\sqrt{\tfrac{\beta}{cD}} - \left(\sqrt{\tfrac{\beta}{cD}}\right)^3\right)\right).$$
Putting it together, we have
$$\Pr[\mathrm{sign}((g_1)_i + (g_2)_i) = y_i] \le 1 - \frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\left(\sqrt{\tfrac{\beta}{cD}} - \left(\sqrt{\tfrac{\beta}{cD}}\right)^3\right).$$
Together with the fact that $|S| \ge \gamma n$, we get
$$\Pr[Y_S = y_S \mid \mathcal{S} = S, g_1 = g] \le \left(1 - \frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\left(\sqrt{\tfrac{\beta}{cD}} - \left(\sqrt{\tfrac{\beta}{cD}}\right)^3\right)\right)^{\gamma n}$$
which implies that
$$H(Y) \ge -\left(1 - \frac{1}{D}\right)\frac{(c-1)n}{c}\log\left(1 - \frac{1}{\sqrt{2\pi}}\exp(-cD/2\beta)\left(\sqrt{\tfrac{\beta}{cD}} - \left(\sqrt{\tfrac{\beta}{cD}}\right)^3\right)\right).$$
By setting $c = D = 3^{1/4}\sqrt{\beta}$ and a straightforward (albeit unpleasant) calculation, we can check that $H(Y) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\beta}$, as we need.

We next show that the moments of the distribution are preserved up to a constant $\frac{G}{1+\beta}$.

Lemma 3.1. The distribution $\tilde\mu$ satisfies $\frac{G}{1+\beta}\Sigma_{i,j} \le \mathbb{E}_{\tilde\mu}[X_i X_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$.

Proof. Consider the Gram decomposition $\tilde\Sigma_{i,j} = \langle v_i, v_j\rangle$. Then, $\mathrm{sign}(g)$ for $g \sim \mathcal{N}(0, \tilde\Sigma)$ is equal in distribution to $(\mathrm{sign}(\langle v_1, s\rangle), \ldots, \mathrm{sign}(\langle v_n, s\rangle))$, where $s \sim \mathcal{N}(0, I)$. Similarly as in the analysis of Goemans-Williamson [GW95], if $\bar v_i = \frac{1}{\|v_i\|}v_i$, we have $G\langle\bar v_i, \bar v_j\rangle \le \mathbb{E}_{\tilde\mu}[X_i X_j] = \frac{2}{\pi}\arcsin(\langle\bar v_i, \bar v_j\rangle) \le \langle\bar v_i, \bar v_j\rangle$. However, since $\langle\bar v_i, \bar v_j\rangle = \frac{1}{\|v_i\|\|v_j\|}\langle v_i, v_j\rangle = \frac{1}{\|v_i\|\|v_j\|}\tilde\Sigma_{i,j} = \frac{1}{\|v_i\|\|v_j\|}\Sigma_{i,j}$ and $\|v_i\| = \sqrt{\tilde\Sigma_{i,i}} = \sqrt{1+\beta}, \forall i \in [1,n]$, we get that $\frac{G}{1+\beta}\Sigma_{i,j} \le \mathbb{E}_{\tilde\mu}[X_i X_j] \le \frac{1}{1+\beta}\Sigma_{i,j}$, as we want.

Theorem 3.2 and Lemma 3.1 together imply Theorem 3.1.

4 Provable bounds for variational methods

We will in this section consider applications of the approximate maximum entropy principles we developed to calculating partition functions of Ising models. Before we dive into the results, we give brief preliminaries on variational methods and pseudo-moment convex relaxations.

Preliminaries on variational methods and pseudo-moment convex relaxations Recall, variational methods are based on the following simple lemma, which characterizes $\log Z$ as the solution of an optimization problem. It essentially dates back to Gibbs [Ell12], who used it in the context of statistical mechanics, though it has been rediscovered by machine learning researchers [WJ08]:

Lemma 4.1 (Variational characterization of $\log Z$). Let us denote by $\mathcal{M}$ the polytope of distributions over $\{-1,1\}^n$.
Then,
$$\log Z = \max_{\mu \in \mathcal{M}}\left\{\sum_t J_t\mathbb{E}_\mu[\phi_t(x)] + H(\mu)\right\} \qquad (1)$$
While the above lemma reduces calculating $\log Z$ to an optimization problem, optimizing over the polytope $\mathcal{M}$ is impossible in polynomial time. We will proceed in a way which is natural for optimization problems – by instead optimizing over a relaxation $\mathcal{M}'$ of that polytope. The relaxation will be associated with the degree-2 Lasserre hierarchy. Intuitively, $\mathcal{M}'$ has as variables tentative pairwise moments of a distribution over $\{-1,1\}^n$, and it imposes all constraints on the moments that hold for distributions over $\{-1,1\}^n$. To define $\mathcal{M}'$ more precisely we will need the following notion (for a more in-depth review of moment-based convex hierarchies, the reader can consult [BKS14]):

Definition 4.1. A degree-2 pseudo-moment7 $\tilde{\mathbb{E}}_\nu[\cdot]$ is a linear operator mapping polynomials of degree 2 to $\mathbb{R}$, such that $\tilde{\mathbb{E}}_\nu[x_i^2] = 1$, and $\tilde{\mathbb{E}}_\nu[p(x)^2] \ge 0$ for any polynomial $p(x)$ of degree 1.

We will be optimizing over the polytope $\mathcal{M}'$ of all degree-2 pseudo-moments, i.e. we will consider solving
$$\max_{\tilde{\mathbb{E}}_\nu[\cdot] \in \mathcal{M}'}\left\{\sum_t J_t\tilde{\mathbb{E}}_\nu[\phi_t(x)] + \tilde H(\tilde{\mathbb{E}}_\nu[\cdot])\right\}$$
where $\tilde H$ will be a proxy for the entropy which we will have to define (since entropy is a global property that depends on all moments, and $\tilde{\mathbb{E}}_\nu$ only contains information about second-order moments). To see that this optimization problem is convex, we show that it can easily be written as a semidefinite program. Namely, note that the pseudo-moment operators are linear, so it suffices to define them over monomials only. Hence, the variables will simply be $\tilde{\mathbb{E}}_\nu[x_S]$ for all monomials $x_S$ of degree at most 2. The constraints $\tilde{\mathbb{E}}_\nu[x_i^2] = 1$ are then clearly linear, as is the "energy part" of the objective function. So we only need to worry about the constraint $\tilde{\mathbb{E}}_\nu[p(x)^2] \ge 0$ and the entropy functional. We claim the constraint $\tilde{\mathbb{E}}_\nu[p(x)^2] \ge 0$ can be written as a PSD constraint: namely, if we define the matrix $Q$, which is indexed by all the monomials of degree at most 1, and satisfies $Q(x_S, x_T) = \tilde{\mathbb{E}}_\nu[x_S x_T]$, it is easy to see that $\tilde{\mathbb{E}}_\nu[p(x)^2] \ge 0 \equiv Q \succeq 0$.

7 The reason $\tilde{\mathbb{E}}_\nu[\cdot]$ is called a pseudo-moment is that it behaves like the moments of a distribution $\nu : \{-1,1\}^n \to [0,1]$, albeit only over polynomials of degree at most 2.

Hence, the final concern is how to write an expression for the entropy in terms of the low-order moments, since entropy is a global property that depends on all moments. There are many candidates for this in machine learning, such as the Bethe/Kikuchi entropy, tree-reweighted Bethe entropy, log-determinant, etc. However, in the worst case, none of them come with any guarantees. We will in fact show that the entropy functional is not an issue – we will relax the entropy trivially to $n$. Given all of this, the final relaxation we will consider is:
$$\max_{\tilde{\mathbb{E}}_\nu[\cdot] \in \mathcal{M}'}\left\{\sum_t J_t\tilde{\mathbb{E}}_\nu[\phi_t(x)] + n\right\} \qquad (2)$$
From the prior setup it is clear that the solution to (2) is an upper bound on $\log Z$. To prove a claim like Theorem 2.3 or Theorem 2.4, we will then provide a rounding of the solution. In this instance, this means producing a distribution $\tilde\mu$ which has a value of $\sum_t J_t\mathbb{E}_{\tilde\mu}[\phi_t(x)] + H(\tilde\mu)$ comparable to the value of the solution. Note this is slightly different from the usual requirement in optimization, where one cares only about producing a single $x \in \{-1,1\}^n$ with value comparable to the solution. Our distribution $\tilde\mu$ will have entropy $\Omega(n)$, and will preserve the "energy" portion of the objective $\sum_t J_t\mathbb{E}_\mu[\phi_t(x)]$ up to a factor comparable to what is achievable in the optimization setting.
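For Ising models ($\phi_{i,j}(x) = x_i x_j$), relaxation (2) is a small semidefinite program. The following CVXPY sketch is our own illustration of it, keeping only the pairwise-moment block of the matrix $Q$ (which suffices for centered distributions); the function name and the `extra_constraints` hook are ours, not the paper's.

```python
import cvxpy as cp
import numpy as np

def log_partition_upper_bound(J, extra_constraints=()):
    """Solve relaxation (2) for an Ising model mu(x) ~ exp(sum_ij J_ij x_i x_j):
    maximize the pseudo-energy plus the trivial entropy relaxation n."""
    n = J.shape[0]
    Q = cp.Variable((n, n), symmetric=True)   # tentative pairwise moments E~[x_i x_j]
    constraints = [Q >> 0, cp.diag(Q) == 1]   # PSD-ness and E~[x_i^2] = 1
    constraints += [c(Q) for c in extra_constraints]
    objective = cp.Maximize(cp.sum(cp.multiply(J, Q)) + n)
    cp.Problem(objective, constraints).solve()
    return objective.value, Q.value
```

The returned moment matrix `Q.value` is exactly the object the roundings in the next subsection consume.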
Warmup: exponential family analogue of MAX-CUT As a warmup, to illustrate the basic ideas behind the above rounding strategy, before we consider Ising models we consider the exponential family analogue of MAX-CUT, defined by the functionals $\phi_{i,j}(x) = (x_i - x_j)^2$. Concretely, we wish to approximate the partition function of the distribution $\mu(x) \propto \exp\left(\sum_{i,j} J_{i,j}(x_i - x_j)^2\right)$. We will prove the following simple observation:

Observation 4.1. The relaxation (2) provides a factor 2 approximation of $\log Z$.

Proof. We proceed as outlined in the previous section, by providing a rounding of (2). We point out again that, unlike the standard case in optimization, where typically one needs to produce an assignment of the variables, because of the entropy term here it is crucial that the rounding produces a distribution. The distribution $\tilde\mu$ we produce here is especially simple: we round each $x_i$ to $\pm 1$ uniformly and independently. Then, clearly $H(\tilde\mu) = n$. On the other hand, we similarly have $\Pr_{\tilde\mu}[(x_i - x_j)^2 = 4] = \frac12$, since $x_i$ and $x_j$ are rounded independently. Hence, $\mathbb{E}_{\tilde\mu}[(x_i - x_j)^2] = 2 \ge \frac12\tilde{\mathbb{E}}_\nu[(x_i - x_j)^2]$. Altogether, this implies
$$\sum_{i,j} J_{i,j}\mathbb{E}_{\tilde\mu}[(x_i - x_j)^2] + H(\tilde\mu) \ge \frac12\left(\sum_{i,j} J_{i,j}\tilde{\mathbb{E}}_\nu[(x_i - x_j)^2] + n\right)$$
as we needed.

4.1 Ising models

We proceed with the main results of this section on Ising models, which is the case where $\phi_{i,j}(x) = x_i x_j$. We will treat the ferromagnetic and general cases separately, as outlined in Section 2. To be concrete, we will be given potentials $J_{i,j}$, and we wish to calculate the partition function of the Ising model $\mu(x) \propto \exp\left(\sum_{i,j} J_{i,j} x_i x_j\right)$.

Ferromagnetic case Recall, in the ferromagnetic case of the Ising model, we have the condition that the potentials satisfy $J_{i,j} > 0$. We will provide a convex relaxation which has a constant factor approximation in this case. First, recall the famous first Griffiths inequality due to Griffiths [Gri67], which states that in the ferromagnetic case, $\mathbb{E}_\mu[x_i x_j] \ge 0, \forall i,j$. Using this inequality, we will look at the following natural strengthening of the relaxation (2):
$$\max_{\tilde{\mathbb{E}}_\nu[\cdot] \in \mathcal{M}';\ \tilde{\mathbb{E}}_\nu[x_i x_j] \ge 0, \forall i,j}\left\{\sum_t J_t\tilde{\mathbb{E}}_\nu[\phi_t(x)] + n\right\} \qquad (3)$$
We will prove the following theorem, as a straightforward implication of our claims from Section 3:

Theorem 4.1. The relaxation (3) provides a factor 50 approximation of $\log Z$.

Proof. Notice that, due to Griffiths' inequality, (3) is in fact a relaxation of the Gibbs variational principle, and hence an upper bound on $\log Z$. As before, we will provide a rounding of (3). We will use the distribution $\tilde\mu$ we designed in Section 3: the sign of a Gaussian with covariance matrix $\Sigma + \beta I$, for a $\beta$ which we will specify. By Theorem 3.2, we then have $H(\tilde\mu) \ge \frac{n}{25}\cdot\frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\beta}$ whenever $\beta \ge \frac{1}{3^{1/2}}$. By Lemma 3.1, on the other hand, we can prove that
$$\mathbb{E}_{\tilde\mu}[x_i x_j] \ge \frac{G}{1+\beta}\tilde{\mathbb{E}}_\nu[x_i x_j].$$
By setting $\beta = 21.8202$, we get $\frac{1}{25}\cdot\frac{(3^{1/4}\sqrt{\beta}-1)^2}{\sqrt{3}\beta} \ge 0.02$ and $\frac{G}{1+\beta} \ge 0.02$, which implies that
$$\sum_{i,j} J_{i,j}\mathbb{E}_{\tilde\mu}[x_i x_j] + H(\tilde\mu) \ge 0.02\left(\sum_{i,j} J_{i,j}\tilde{\mathbb{E}}_\nu[x_i x_j] + n\right)$$
which is what we need.

Note that the above proof does not work in the general Ising model case: when $\tilde{\mathbb{E}}_\nu[x_i x_j]$ can be either positive or negative, even if we preserved each $\tilde{\mathbb{E}}_\nu[x_i x_j]$ up to a constant factor, this may not preserve the sum $\sum_{i,j} J_{i,j}\tilde{\mathbb{E}}_\nu[x_i x_j]$, due to cancellations in that expression.

General Ising models case Finally, we will tackle the general Ising model case. As noted in the previous section, the straightforward application of the results proven in Section 3 doesn't work, so we have to consider a different rounding – again inspired by roundings used in optimization; a small end-to-end simulation of the ferromagnetic rounding we just analyzed is sketched below, before we continue.
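This is a hedged sketch, assuming the helpers `log_partition_upper_bound` and `sample_dichotomized_gaussian` from the earlier sketches; all names and the particular random instance are ours, and the printout is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
J = np.triu(rng.random((n, n)), 1)
J = J + J.T                                   # ferromagnetic: J_ij > 0

# Relaxation (3) = relaxation (2) plus the Griffiths constraints E~[x_i x_j] >= 0.
ub, Q = log_partition_upper_bound(J, extra_constraints=[lambda Q: Q >= 0])
Q = (Q + Q.T) / 2                             # symmetrize against solver noise

# Round as in Theorem 4.1: signs of N(0, Q + beta * I) with beta = 21.8202.
X = sample_dichotomized_gaussian(Q, beta=21.8202, num_samples=100_000, rng=rng)
energy = np.einsum('ij,ti,tj->t', J, X, X).mean()
# The rounded distribution has entropy >= 0.02 n (Theorem 3.2), so
# energy + 0.02 n should be at least 0.02 * ub, certifying the factor 50.
print(ub, 0.02 * ub, energy + 0.02 * n)
```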
The intuition is the same as in the ferromagnetic case: we wish to design a rounding which preserves the "energy" portion of the objective, while having high entropy. In the previous section, this was achieved by modifying the Goemans-Williamson rounding so that it produces a high-entropy distribution. We will do a similar thing here, by modifying the roundings due to [CW04] and [AMMN06]. The convex relaxation we will consider is just the basic one, (2), and we will prove the following two theorems:

Theorem 4.2. The relaxation (2) provides a factor $O(\log n)$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_i x_j$.

Theorem 4.3. The relaxation (2) provides a factor $O(\log \chi(G))$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_i x_j$ for $i,j \in E(G)$ of some graph $G = (V(G), E(G))$, where $\chi(G)$ is the chromatic number of $G$.

Since the chromatic number of a graph is bounded by $n$, the second theorem is in fact strictly stronger than the first; however, the proof of the first theorem uses less heavy machinery, and is illuminating enough to be presented on its own. Due to space constraints, the proofs of these theorems are deferred to the appendix.

5 Conclusion

In summary, we presented computationally efficient approximate versions of the classical max-entropy principle of [Jay57]: efficiently sampleable distributions which preserve given pairwise moments up to a multiplicative constant factor, while having entropy within a constant factor of the maximum entropy distribution matching those moments. Additionally, we applied our insights to designing provable variational methods for Ising models, which provide guarantees for approximating the log-partition function comparable to those in the optimization setting. Our methods are based on convex relaxations of the standard variational principle due to Gibbs; they are extremely generic, and we hope they will find applications for other exponential families.
1. What is the main contribution of the paper regarding the maximum entropy principle?
2. What are the strengths and weaknesses of the proposed approach to establishing approximate maximum entropy?
3. How does the paper utilize the results from approximate maximum entropy in variational methods and Ising models?
4. Are there any concerns or limitations regarding the usefulness of the approximation in practical applications?
5. Are there any difficulties in following certain parts or connections in the paper, particularly in understanding the relationship between approximate maximum entropy and calculating the partition function in Ising models?
Review
Review The paper tries to establish a practical adaptation of the maximum entropy principle, since the principle itself is impractical from a computational perspective. Given a covariance matrix, it introduces a distribution such that 1) it is easy to draw samples from, 2) its pairwise covariances are within a constant-factor approximation of the elements of the given covariance matrix, and 3) its entropy is lower-bounded by a linear term (which basically means it is close to the maximum entropy). Moreover, using the results on approximate maximum entropy, the paper finds an approximation to the partition function of a particular member of the exponential family -- namely, Ising models. These approximation guarantees are established by means of degree-2 pseudo-moment relaxations of the standard variational principle. The paper takes a fairly novel approach to establishing approximate maximum entropy. However, it is not clear how useful this approximation is in variational methods, and specifically on Ising models, since it depends on the constants in the asymptotic notation, which are not made explicit. The organization of the paper is solid; nevertheless, there are some parts and connections that are hard to follow. Concretely, the connection between approximate maximum entropy and calculating the partition function in Ising models is not well explained, from the reviewer's perspective.
NIPS
Title Asynchronous Decentralized SGD with Quantized and Local Updates

Abstract Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogeneous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a supercomputing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

1 Introduction

Decentralized optimization has recently emerged as a promising approach for scaling the distributed training of machine learning models, in particular via stochastic gradient descent (SGD) [Lian et al., 2017, Tang et al., 2018, Koloskova et al., 2019a]. Its key advantage is that it removes the need for a central coordinator node in distributed training, and therefore it can allow for high scaling. The general decentralized optimization setting is the following: we are given n nodes, each with a subset of data from some distribution, which can communicate over some underlying graph topology. In each global round, each node samples some local data, performs a local gradient step, and is paired with a neighbor, which may be chosen randomly. The nodes exchange model information pairwise, and then update their models, often via direct model averaging. Variants of this setting have been analyzed since pioneering work by Tsitsiklis [1984], for various estimation and optimization algorithms [Xiao and Boyd, 2004, Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014], and have seen renewed interest given their applicability to training deep neural networks (DNNs) at scale, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Recently, there has been significant focus on reducing the synchronization overheads of decentralized training, usually employing three approaches: 1) implementing faster non-blocking communication between communication partners in a round [Lian et al., 2018, Assran et al., 2018], which may cause them to see stale versions of their models, 2) allowing nodes to take local steps in between their communication rounds [Wang and Joshi, 2018, Koloskova et al., 2020], and 3) applying quantization to the communication [Lu and De Sa, 2020, Tang et al., 2018, Koloskova et al., 2019a,b].
The above impressive line of work contributes a rich set of algorithmic and analytic ideas; however, one common limitation is that the algorithms are usually set in the synchronous gossip model, which requires all nodes to perform their communication in lock-step rounds and share a common notion of time, thus reducing their practicality. To mitigate this fact, some references, e.g. [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020], partially relax this requirement, although they do so at the cost of additional assumptions, or reduced guarantees, as we discuss in related work. Another relative limitation is that the analyses are usually customized to the bespoke communication-reduced methods being applied, and therefore are hard to generalize to other methods.

Our Contribution. In this paper, we consider decentralized SGD-based optimization in the simpler, but harder to analyze, asynchronous gossip model [Xiao and Boyd, 2004], in which communication occurs in discrete, randomly chosen pairings among nodes, and does not require a common notion of time. We prove that a new variant of SGD we call SwarmSGD converges in this setting, even though it supports all three communication-reduction approaches mentioned above in conjunction. Our analysis generalizes to heterogeneous data distributions and communication topologies.

At a high level, SwarmSGD works as follows. Each node i maintains a local model estimate Xi, based on which gradients are generated, and a shared buffer where quantized models are stored for communication with other nodes. In each step, node i first computes a sequence of H local gradient steps, which it does not yet apply. Next, the node chooses a communication partner j, uniformly at random among its neighbors. Then, node i reads from its own communication buffer and from the communication buffer of j, obtaining quantized models Qi and Qj. A subtlety here is that Qi is not necessarily the quantized version of the model Xi, since other nodes can write concurrently to i's buffer. Node i then averages Qi with Qj, and updates the neighbor's remote buffer to the quantized average. Finally, it applies its local gradient steps to the resulting average, adopts this as its next model Xi, and writes a quantized version of it in its own shared buffer. This procedure can be implemented in a deadlock-free, non-blocking manner, by using either shared memory or the remote direct-memory access (RDMA) calls supported by MPI [Woodall et al., 2006]. Importantly, the communication partner j does not need to block its computation during communication, and may be contacted by more than one interaction partner during a single local step, although we do assume that individual reads and writes are performed atomically.

A key component of this procedure is the quantization scheme: directly using an unbiased quantizer, e.g. [Alistarh et al., 2017], would destroy convergence guarantees, as the quantization error would be proportional to the model norm, which may not be bounded. Instead, we use a customized variant of the quantization scheme of Davies et al. [2021], whose error depends on the distance between the point being quantized (the model) and an arbitrary reference point, provided as a parameter. We prove that each node can reliably use its own model as a reference point to quantize and de-quantize messages placed in its buffer by other nodes. In turn, this requires care in the analysis.
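To make the step structure concrete, here is a highly simplified, single-threaded sketch of one SwarmSGD interaction, with quantization replaced by the identity (the variant analyzed in Corollary 4.2 below); all function and variable names, including the hypothetical `grad_fn`, are ours.

```python
import numpy as np

def swarm_interaction(models, buffers, grad_fn, i, j, eta, H, rng):
    """One interaction: initiator i finishes its local SGD steps on its stale
    model, then averages its published model with neighbor j's, one-way."""
    x_stale = models[i]                       # the model i used while computing
    h = np.zeros_like(x_stale)                # accumulated (unapplied) local steps
    for _ in range(rng.geometric(1.0 / H)):   # geometric number of steps, mean H
        h += grad_fn(i, x_stale - eta * h)
    avg = (buffers[i] + buffers[j]) / 2       # buffers may hold concurrently updated views
    models[i] = avg - eta * h                 # i adopts the average plus its local work
    buffers[i] = models[i].copy()             # publish i's new model
    buffers[j] = avg.copy()                   # one-way write to j's buffer; j never blocks
```

In the real implementation the two buffer writes hold encoded (quantized) models and go over one-sided shared-memory or RDMA operations, which is what makes the step non-blocking for j.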
Specifically, the key observation behind our analysis is in showing that the nodes' local models stay well-enough concentrated around their mean throughout optimization to allow for correct decoding of quantized models, which in turn implies joint convergence by the nodes towards a point of vanishing gradient. This concentration follows via a non-trivial super-martingale argument. If nodes take a constant number of local SGD steps between communication steps, then SwarmSGD has $\Theta(\sqrt{n})$ speedup to convergence for non-convex objectives. This matches results from previous work which considered decentralized dynamics but with global synchronization [Lian et al., 2017].

Experimental Validation. We apply SwarmSGD to train deep neural networks on image classification and machine translation (NMT) tasks, deployed on the Piz Daint supercomputer [Piz, 2019]. Experiments confirm the intuition that the average synchronization cost of SwarmSGD per iteration is low: it stays at less than 10% of the batch computation time, and remains constant as we increase the number of nodes. For example, using SwarmSGD, we are able to train a TransformerXL [Vaswani et al., 2017] model on WMT17 (En-Ge) 1.5× faster than a highly-optimized large-batch SGD baseline, and to slightly higher accuracy, without additional hyper-parameter tuning. At the same time, due to the reduced communication frequency, Swarm also improves upon the speed of previous practical decentralized methods, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Importantly, we also note that, in less overparametrized settings such as training residual CNNs [He et al., 2016] on ImageNet [Russakovsky et al., 2015], nodes do need to perform more iterations over the dataset relative to the baseline in order to recover full accuracy. This is predicted by the analysis, and confirms similar findings in previous work [Assran et al., 2018]. Overall, our method does appear well-suited to training large modern models at node counts where global synchronization among all nodes is prohibitively expensive.

Related Work. Decentralized optimization has a long history [Tsitsiklis, 1984], and is related to the study of gossip algorithms, e.g. [Kempe et al., 2003, Xiao and Boyd, 2004, Boyd et al., 2006]. Gossip is usually studied in one of two models [Boyd et al., 2006]: synchronous, structured in global rounds, where each node interacts with a randomly chosen neighbor, forming a matching, and asynchronous, where each node wakes up at random times, e.g. given by a Poisson clock, and picks a random neighbor to interact with. Several classic optimization algorithms have been analyzed in the asynchronous gossip model [Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014]. In this paper, we focus on analyzing decentralized SGD in this model. As mentioned, the growing line of work on decentralized optimization for machine learning has mostly focused on variants of the synchronous gossip model. Specifically, Lian et al. [2017] considered this setting in the context of DNN training, while Tang et al. [2018] and Koloskova et al. [2019b] also analyzed decentralized optimization with quantization in the synchronous model. Wang and Joshi [2018] and Koloskova et al. [2020] provided analysis frameworks for synchronous decentralized SGD with local updates, and possibly changing topologies. Lian et al. [2018] and Assran et al. [2018]
focused specifically on reducing synchronization costs in this setting, and proposed algorithms with partially non-blocking communication, in which nodes may read a stale version of the interaction partner's information, modelling e.g. a communication buffer. However, the maximum staleness must be bounded by a global variable τ, which must be enforced throughout the execution. As observed by Assran et al. [2018], enforcing this bound can cause blocking, and therefore the authors of these works propose to implement a relaxed round-based model, in which nodes interact once per round in perfect matchings. Their algorithms provide $O(1/\sqrt{Tn})$ convergence rates, under analytical assumptions. Upon careful examination, we find that their analysis approach can be extended to the asynchronous gossip model we consider, by defining the "contact matrices" to correspond to pairwise interactions. However, this introduces two significant limitations. First, the analysis will not support local gradient updates to models, nor quantized communication. If we remove these practical relaxations, our technique yields better bounds, as our potential analysis is specifically tailored to this dynamic interaction model. Second, as we detail in the Appendix, some of their technical conditions imply the existence of global synchronization. For Assran et al. [2018], as we detail in the Appendix, their analysis would not guarantee any non-trivial speedup due to parallelization in the asynchronous gossip model. We describe these issues in detail and present a systematic comparison in Appendix A.

Lu and De Sa [2020] provided a novel approach to analyzing decentralized SGD with quantization and limited asynchrony: specifically, their algorithm requires blocking communication, i.e. nodes have to synchronize explicitly during interactions, but may see old versions of each other's models. More precisely, during each interaction, both parties are responsible for updating their local models, meaning that once a node is woken up (we call it the initiator node) and chooses an interaction partner, it has to block until the partner is woken up as well. In our case, the initiator can update both its local model and the local model of its partner, and proceed to the next step without blocking. Koloskova et al. [2019a] use a similar update rule in the synchronous model. Zhang and You [2021] recently proposed a decentralized algorithm which is fully asynchronous as long as node activation rates and message delays are bounded. As noted earlier, bounding activation rates does imply blocking; however, tolerating (bounded) message delays does improve over our approach of updating models using atomic writes. The setting further differs in that they assume that nodes compute full (non-stochastic) gradients, as well as that the loss function satisfies the PL condition. In sum, we are the first to explicitly consider the asynchronous gossip model, and the impact of local updates, asynchrony, and quantization used in conjunction with decentralized SGD. Our technique is new, relies on a fine-grained analysis of individual interactions, and can yield improved bounds even in the case where H = 1. Further, our algorithm is the first to allow for both communication compression and non-blocking communication. From the implementation perspective, the performance of our algorithm matches or improves upon that of previous methods, notably D-PSGD [Lian et al., 2017], AD-PSGD [Lian et al., 2018] and SGP [Assran et al., 2018].
2 Preliminaries

The Distributed System Model. We consider a model which consists of n ≥ 2 nodes, each of which is able to perform local computation. We assume that the communication network of the nodes is a graph G with spectral gap $\lambda_2$, which denotes the second smallest eigenvalue of the Laplacian of G. Let $\rho_{\max}, \rho_{\min}$ be the maximum and minimum degrees in G, respectively. We will focus on densely-connected topologies, which model supercomputing and cloud networks: for instance, the standard Dragonfly topology [Kim et al., 2008, Besta and Hoefler, 2014] is regular, densely connected and low-diameter, mimicking regular expanders. The execution is modelled as occurring in discrete steps, where in each step a new node (the "initiator") is sampled, and can then contact one of its neighbors (the "responder") uniformly at random. (At the algorithm level, the initiator is "sampled" once it completes its current computational step, and seeks to interact with a neighbor.) We denote the number of steps for which we run by T. Globally, the communication steps can be seen as a sequence of sampled directed communication edges. Thus, the basic unit of time is a single pairwise interaction between two nodes. Notice however that in a real system Θ(n) of these interactions could occur in parallel. Thus, the standard global time measure is parallel time, defined as the total number of interactions divided by n, the number of nodes. Parallel time intuitively corresponds to the average number of interactions per node until convergence. This model is identical to the asynchronous gossip model [Xiao and Boyd, 2004], and to the population protocol model [Angluin et al., 2006].

Stochastic Optimization. We assume that the agents wish to jointly minimize a d-dimensional, differentiable function $f : \mathbb{R}^d \to \mathbb{R}$. Specifically, we will assume the empirical risk minimization setting, in which agents are given access to a set of m data samples $S = \{s_1, \ldots, s_m\}$ coming from some underlying distribution $\mathcal{D}$, and to functions $\ell_i : \mathbb{R}^d \to \mathbb{R}$ which encode the loss of the argument at the sample $s_i$. The goal of the agents is to converge on a model $x^*$ which minimizes the empirical loss over the m samples, that is, $x^* = \operatorname{argmin}_x f(x) = \operatorname{argmin}_x \frac{1}{m}\sum_{i=1}^m \ell_i(x)$. We assume that each agent i has a local function $f_i$ associated to its fraction of the data, i.e. $\forall x \in \mathbb{R}^d$: $f(x) = \sum_{i=1}^n f_i(x)/n$. Agents employ these samples to run a decentralized variant of SGD, described in detail in the next section. For this, we will assume that each agent i has access to unbiased stochastic gradients $\tilde g_i$ of the function $f_i$, which are functions such that $\mathbb{E}[\tilde g_i(x)] = \nabla f_i(x)$. Stochastic gradients can be computed by each agent by sampling i.i.d. from the distribution $\mathcal{D}$, and computing the gradient of $f$ at $x$ with respect to that sample. Our analysis also extends to the case where each agent samples from its own partition of the data. We assume the following conditions about the objective function, although not all our results require the second-moment bound:

1. Smooth Gradients: The gradient $\nabla f_i(x)$ is L-Lipschitz continuous for some L > 0, i.e. for all $x, y \in \mathbb{R}^d$ and agent i: $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|$. (1)
2. Bounded Variance: The variance of the stochastic gradients is bounded by some $\sigma^2 > 0$, i.e. for all $x \in \mathbb{R}^d$ and agent i: $\mathbb{E}\|\tilde g_i(x) - \nabla f_i(x)\|^2 \le \sigma^2$. (2)
3. Bounded Local Function Variance: There exists $\varsigma^2 > 0$ such that for all $x \in \mathbb{R}^d$: $\sum_{i=1}^n \frac{\|\nabla f(x) - \nabla f_i(x)\|^2}{n} \le \varsigma^2$. (3)
4. Bounded Second Moment: The second moment of the stochastic gradients is bounded by some $M^2 > 0$, i.e. for all $x \in \mathbb{R}^d$ and agent i: $\mathbb{E}\|\tilde g_i(x)\|^2 \le M^2$. (4)

Note that throughout this paper, for any random variable X, by $\mathbb{E}\|X\|^2$ we mean $\mathbb{E}[\|X\|^2]$. Each node has a communication buffer, which, for simplicity, we assume can be read and written atomically by each node. Importantly, buffers can only hold quantized vectors.

Quantization Procedure. We use a quantization function which follows from Lemma 23 in (the full version of) Davies et al. [2021].

Corollary 2.1. (Quantization for Communication Buffers) Fix parameters R and $\epsilon > 0$. There exists a quantization procedure defined by an encoding function $Enc_{R,\epsilon} : \mathbb{R}^d \to \{0,1\}^*$ and a decoding function $Dec_{R,\epsilon} : \mathbb{R}^d \times \{0,1\}^* \to \mathbb{R}^d$ such that, for any vector $x \in \mathbb{R}^d$ which we are trying to quantize, and any vector y which is used by decoding, which we call the decoding key, if $\|x - y\| \le \epsilon R^{R^d}$, then with probability at least $1 - \log\log\left(\frac{\|x-y\|}{\epsilon}\right)O(R^{-d})$, the function $Q_{R,\epsilon}(x) = Dec_{R,\epsilon}(y, Enc_{R,\epsilon}(x))$ has the following properties:
1. (Unbiased decoding) $\mathbb{E}[Q_{R,\epsilon}(x)] = \mathbb{E}[Dec_{R,\epsilon}(y, Enc_{R,\epsilon}(x))] = x$;
2. (Error bound) $\|Q_{R,\epsilon}(x) - x\| \le (R^2 + 7)\epsilon$;
3. (Communication bound) To compute $Dec_{R,\epsilon}(y, Enc_{R,\epsilon}(x))$, only the first B bits of $Enc_{R,\epsilon}(x)$ are needed, where $B = O\left(d\log\left(\frac{R\|x-y\|}{\epsilon}\right)\right)$.

Proof. Lemma 23 of the full version of Davies et al. [2021] provides similar guarantees to the ones we want to prove, but they assume interactive message-passing communication between an encoding node u and a decoding node v. However, in their setting, the messages sent by u are non-adaptive: u simply sends quantizations using an increasing number of bits, until v replies confirming that it has decoded successfully. The number of bits sent during communication is upper bounded by $O\left(d\log\left(\frac{R\|x-y\|}{\epsilon}\right)\right)$, where x is the vector node u is sending and y is the vector node v is using for decoding. In our setting, we use communication buffers instead, so node u can simply append all of its potential messages together as $Enc_{R,\epsilon}(x)$. Critically, notice that node u should append enough bits so that decoding is possible (since in our setting there is no way for v to acknowledge that it has received enough bits). This can be done in two ways. If u knows the distance between x and y, then u can simply write $O\left(d\log\left(\frac{R\|x-y\|}{\epsilon}\right)\right)$ bits in the register. In the second case, u does not know the distance. Let T be the total number of times nodes communicate throughout our algorithm. We will show that with high probability all distances between encoded vectors and decoding keys are at most $\epsilon T^{17} R$ (the dependence on T stems from the fact that we wish to show an upper bound with high probability; please see Lemma B.19 in the Appendix), and therefore at most $O(d\log T)$ bits for quantization suffice in the worst case. Thus, the node writes $O(d\log T)$ bits in the register, but when v tries to decode, it does not need all those bits: it reads and uses only the first $O\left(d\log\left(\frac{R\|x-y\|}{\epsilon}\right)\right)$ bits.

Counting Communication Cost. We emphasize that, when we calculate the number of bits needed by quantization, we actually aim to measure the number of bits exchanged between u and v. In the setting we consider, which has local registers/communication buffers, this is the number of bits spent to read from (or to write to) the non-local register.
Since the second case above involves writing a relatively large number of bits, we will use it only when u is writing a quantized value to its own register/buffer, and so does not need to communicate the bits. Then, only the $O\left(d\log\left(\frac{R\|x-y\|}{\epsilon}\right)\right)$ bits read by v need to be communicated. To summarize, in our algorithm we will always ensure that whenever some node u writes a quantized value, it either knows the key which will be used for decoding it, or is writing to its local register.

3 The SwarmSGD Algorithm

We now describe a decentralized variant of SGD, designed to be executed by a population of n nodes, interacting over the edges of a communication graph G, as described above. The algorithm proceeds in individual communication steps, where in each step a node which has completed its local computation seeks a random neighbor to communicate with. We will alternatively say that a node gets activated (once it finishes computation) and then becomes the initiator of the interaction.

The Communication Registers. The communication buffer of each node i consists of two registers: one containing an encoded version of its own (possibly outdated) model, which will only be written to by node i itself, and one holding an encoded version of its current model, which will only be written to by other nodes. (This second register can be seen as a "communication queue" for the nodes wishing to communicate with i.) Initially, all registers contain zero vectors.

Parallel Execution. For simplicity, we will skip the details of quantization, and assume that nodes write and read quantized models directly, without encoding and decoding steps. Both the current and outdated models are zero vectors initially. Each node i computes a number of local gradients based on its last model view, and other nodes may update this model while i is computing. Hence, only after node i is done computing local gradients does it read its updated current model. Let $\hat X_i$ be the value of the outdated model and let $X_i$ be the value of the current model. Node i computes the average of the quantized models $\frac{Q(X_i)+Q(X_j)}{2}$ and writes it in the register which contains the current model of node j. Next, it computes $\frac{Q(X_i)+Q(X_j)}{2} - \eta\tilde h(\hat X_i)$ (where $\eta$ is the learning rate and $\tilde h(\hat X_i)$ is a sum of local gradients), and writes it in both of its local registers, one containing the current model and one containing the outdated model. Once the write is finished, it again proceeds to compute local gradients, based on the view $\frac{Q(X_i)+Q(X_j)}{2} - \eta\tilde h(\hat X_i)$.

Sequential model. For the analysis, it is useful to map these parallel interactions to a sorted sequence of sequential ones. Thus, time steps track the interactions between agents, and each interaction consists of a random number of local steps which the activated node performs, plus one averaging step where the activated node (or initiator node) contacts a random neighbour. The analysis assumes that nodes get activated randomly, by independent Poisson clocks, which leads to a uniform global sampling distribution. In practice, this could be approximated by having the number of local gradient steps executed by each node be a geometric random variable of mean H. For the sake of practicality, our experiments will take H to be a small constant, instead of a random variable, which yields similar results. The pseudocode from the point of view of a single node i which was activated at step t + 1 is given in Algorithm 1.
For $t \ge 0$, let $Enc(\hat X^i_t)$ and $Enc(X^i_t)$ be the values written in the registers containing the outdated and the current model of agent i after t steps, respectively. That is, $X^i_t$ is the current model of agent i and $\hat X^i_t$ is the outdated model.

The Communication Procedure. Since i was activated at step t, we will assume that it has already computed $H_i$ local gradients using the outdated model $\hat X^i_t$, where $H_i$ is a geometric random variable with mean H, as follows. Let $\tilde h^0_i(\hat X^i_t) = 0^d$; for indices $1 \le q \le H_i$, let $\tilde h^q_i(\hat X^i_t) = \tilde g_i\left(\hat X^i_t - \sum_{s=0}^{q-1}\eta\tilde h^s_i(\hat X^i_t)\right)$ be the q-th local gradient. Then, let $\tilde h_i(\hat X^i_t) = \sum_{q=1}^{H_i}\tilde h^q_i(\hat X^i_t)$ be the sum of all computed local gradients. Alternatively, since we are in a sequential setting, we can assume that i does this computation at step t. First, i retrieves $Q(X^i_t)$ (the quantized version of its current model), by decoding $Enc(X^i_t)$ using key $Q(\hat X^i_t)$. We would like to note that i can obtain $Q(\hat X^i_t)$ simply by decoding $Enc(\hat X^i_t)$ using key $\hat X^i_t$ (which it knows, to full precision, since it calculated the value itself), and this step does not cost any communication bits, since all of the terms involved are local to i's registers. Then, it contacts its interaction partner j. Node i calculates $Q(\hat X^j_t)$ by decoding $Enc(\hat X^j_t)$, again using $\hat X^i_t$ as a key, and then it retrieves $Q(X^j_t)$ by decoding $Enc(X^j_t)$ with key $Q(\hat X^j_t)$. Then, i calculates $X^i_{t+1} = \frac{Q(X^i_t)}{2} + \frac{Q(X^j_t)}{2} - \eta\tilde h_i(\hat X^i_t)$ and $X^j_{t+1} = \frac{Q(X^j_t)}{2} + \frac{Q(X^i_t)}{2}$. Next, node i calculates $Enc(X^i_{t+1})$ and writes it to its own register for its outdated models. Here, we use the second case of quantization from Corollary 2.1: i is not aware of the key that other nodes will use for decoding, but since it is writing to its own local register, it can afford to use the worst-case $O(d\log T)$ bits. Additionally, it writes $Enc(X^i_{t+1})$ to its own register containing the current model, so that there are enough bits for $Q(\hat X^i_{t+1})$. (Note that $\hat X^i_{t+1} = X^i_{t+1}$ has to be used as the decoding key.) Finally, it calculates $Enc(X^j_{t+1})$ and writes it in the register which contains the current model of j, using enough bits that it can be decoded using $Q(\hat X^j_{t+1})$ (we have that $\hat X^j_{t+1} = \hat X^j_t$). Notice that, the way our algorithm is specified, every node which tries to decode $Enc(X^j_{t+1})$ will use $Q(\hat X^j_{t+1})$ as a key (which i knows), hence Corollary 2.1 holds in this case as well. We emphasize the fact that all this communication is one-way, as it does not require j's intervention. By Corollary 2.1, the total number of bits used is:
$$O\left(d\log\left(\frac{R\|\hat X^i_t - \hat X^j_t\|}{\epsilon}\right)\right) + O\left(d\log\left(\frac{R\|Q(\hat X^j_t) - X^j_t\|}{\epsilon}\right)\right) + O\left(d\log\left(\frac{R\|Q(\hat X^j_t) - X^j_{t+1}\|}{\epsilon}\right)\right).$$
(Recall that we count only reading from and writing to other registers, and do not count operations i performs on its own registers.) We will show that we can make the probability of any instance of quantization failing less than $1/T^c$, for some sufficiently large constant c, by setting the constant factor in the number of bits sufficiently high. Then, we can take a union bound over all instances of quantization throughout the algorithm, to show that none fail, with high probability in T. We will then be able to prove the convergence of our algorithm conditioned on this event.

Avoiding race conditions. An interesting question is what happens when multiple nodes contact j concurrently. For conciseness, our pseudocode assumes that the update sequence in lines 8–14 happens atomically, but this sequence can cause a data race. To mitigate this, we can use a bounded non-blocking queue [Michael and Scott, 1996] at each node instead of a single buffer. Thus, instead of updating the buffer value atomically, each node simply appends the corresponding quantized model mean to j's communication queue. In practice, this queue is extremely unlikely to be contended, since communication collisions are rare.

Algorithm 1 Sequential SwarmSGD pseudocode for each interaction between nodes i and j.
1: % Let G be a communication graph.
2: % Initial models $X^1_0 = X^2_0 = \ldots = X^n_0$
3: for t = 0 to T − 1 do
4:   Sample the initiator node i uniformly at random.
5:   Node i samples a node j, adjacent to it in G, uniformly at random.
6:   Let $t - \tau^i_t$ be the last step at which node i was chosen as initiator.
7:   Let $\hat X^i_t = X^i_{t-\tau^i_t}$ be its model from that step.
8:   $Q(X^i_t) \leftarrow Dec(Q(\hat X^i_t), Enc(X^i_t))$
9:   $Q(\hat X^j_t) \leftarrow Dec(\hat X^i_t, Enc(\hat X^j_t))$
10:  $Q(X^j_t) \leftarrow Dec(Q(\hat X^j_t), Enc(X^j_t))$
11:  $X^i_{t+1} \leftarrow Q(X^i_t)/2 + Q(X^j_t)/2 - \eta\tilde h_i(\hat X^i_t)$
12:  $X^j_{t+1} \leftarrow Q(X^i_t)/2 + Q(X^j_t)/2$
13:  Write $Enc(X^i_{t+1})$ to the registers containing the current and outdated models of node i
14:  Write $Enc(X^j_{t+1})$ to the register containing the current model of node j
15:  For $k \ne i, j$: $X^k_{t+1} = X^k_t$.
16: end for
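Before moving to the analysis, the role of the decoding keys in lines 8–10 can be emulated with a simple "low-order bits" construction: quantize onto a grid, transmit only the low bits of the grid index, and let the decoder disambiguate using the grid point nearest its key. This is a toy NumPy sketch of that idea only (ours; the actual scheme of Davies et al. [2021] is considerably more refined):

```python
import numpy as np

def toy_encode(x, eps, rng):
    # Unbiased stochastic rounding of x onto the grid eps * Z.
    low = np.floor(x / eps)
    return (low + (rng.random(x.shape) < x / eps - low)).astype(np.int64)

def toy_decode(key, code, eps, bits):
    # Read only the low `bits` bits of each grid index: choose the grid point
    # matching them that is closest to the key. This is correct whenever
    # |x_k - key_k| < eps * 2**(bits - 1) per coordinate, so the bit cost
    # grows like log(||x - key|| / eps), mirroring the communication bound.
    mod = 1 << bits
    k = np.round(key / eps).astype(np.int64)
    r = (code - k) % mod
    r = np.where(r >= mod // 2, r - mod, r)   # centered residue in [-mod/2, mod/2)
    return (k + r) * eps
```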
4 The Convergence of SwarmSGD

Let $\mu_t = \sum_{i=1}^n X^i_t/n$ be the mean over the node models at time t. Our main result is the following:

Theorem 4.1. Assume a total number of steps $T \ge 10n$, learning rate $\eta = n/\sqrt{T}$, and quantization parameters $R = 2 + T^{3/d}$ and $\epsilon = \frac{\eta HM}{R^2+7}$. Then, with probability at least $1 - O(\frac{1}{T})$, we have that Algorithm 1 converges at rate
$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(\mu_t)\|^2 \le \frac{2(f(\mu_0) - f(x^*))}{H\sqrt{T}} + \frac{6(\sigma^2 + 6H\varsigma^2)}{\sqrt{T}} + \frac{12HM^2}{\sqrt{T}} + C\frac{n^2\rho_{\max}^3H^2L^2M^2}{T\rho_{\min}\lambda_2^2},$$
for a constant C, and uses $O\left(d\log\left(\frac{\rho_{\max}^2}{\rho_{\min}\lambda_2}\right) + \log T\right)$ expected communication bits per step.

Discussion. First, this notion of convergence is standard in the non-convex case [Lian et al., 2015, 2017, 2018], and each of the upper-bound terms has an intuitive interpretation: the first represents the reduction in loss relative to the initialization, and gets divided by the number of local steps H, since progress is made in this term in every local step; the second represents the noise due to stochasticity, and is naturally linear in H, as H steps are taken in expectation between two interactions. (Recall that in our model T is the number of interactions, and TH is the expected number of gradient steps.) The fourth term encodes overheads caused by local steps, quantization, and graph structure; however, it is usually seen as negligible (cf. [Lu and De Sa, 2020]), due to the division by T. The third term is the critical one, as it implies a dependence on the second-moment bound. Intuitively, this term appears because our algorithm combines both non-blocking communication and quantization: unlike prior work, we do not assume an explicit upper bound τ on communication delays; in conjunction with quantization, the unbounded delay this allows means that our estimate of the model average $\mu_t$ may become dependent on M for large delays, which causes this dependency. While this limitation appears inherent, we are able to remove it if we eliminate quantization: in this case, we get a negligible dependency on M. We formalize this in Corollary 4.2. Second, if we focus on the total number of steps needed to reach some error bound, we notice an interesting trade-off between the linear reduction in H in the first term, due to local steps, and the linear increase in H in the other terms.
Notice that, for dense and low-diameter graphs, such as the regular expanders popular in cluster networks, our convergence bound has no dependence on the graph parameters, and communication is linear in d. However, one limitation is that we could have a log n dependency in the communication for highly irregular and poorly connected graphs. Finally, note that time T here counts total interactions. However, Θ(n) pairwise interactions occur independently in parallel, and so we can slightly abuse notation and replace T by nT in the above formula, to obtain an optimal $\Theta(\sqrt{n})$ speedup in terms of wall-clock time. Yet, this speedup is dampened by the variance due to noisy local gradient steps, a fact which we will revisit in the experimental section.

Proof Overview. At a high level, the argument rests on two technical ideas. The first is that, in spite of noise and local steps, the nodes' parameters remain concentrated around the mean $\mu_t$. The second is to leverage this, and bound the impact of stochastic noise and model staleness on convergence. In particular, the main technical difficulty in the proof is to correctly "encode" the fact that parameters are well concentrated around the mean. A natural approach is to bound the model variance $\Gamma_t$ after t interactions. Formally, we define $\Gamma_t = \sum_{i=1}^n\|X^i_t - \mu_t\|^2$, where $\mu_t = \sum_{i=1}^n X^i_t/n$, as before. We bound the expected evolution of $\Gamma_t$ over time, depending on the learning rate, the number of local steps, the quantization parameter, and the bound provided by the assumption on the stochastic gradients (the bound $M^2$). The critical point is that the upper bound on the expectation of $\Gamma_t$ does not depend on the number of interactions t. More precisely, if all the above hyper-parameters are constant, we get that $\mathbb{E}[\Gamma_t] = O(n)$. Our approach brings tools from classic load balancing [Berenbrink et al., 2009] over to the multi-dimensional case. Three key elements of novelty in our case are that (1) for us the load-balancing process is dynamic, in the sense that new loads, i.e. gradients, get continually added; (2) the load-balancing process we consider is multi-dimensional, whereas the literature usually considers simple scalar weights; (3) the models can be outdated and quantized, which leads to a complex, noisy load-balancing process. We resolve this third and most challenging issue by using carefully-defined auxiliary potentials.

Removing the Second-Moment Bound. Upon reflection, we notice that we can render the dependency on $M^2$ negligible if we do not use quantization, but otherwise keep the algorithm the same:

Corollary 4.2. Given the previous assumptions and learning rate $\eta = n/\sqrt{T}$, for some constant C, we have that Algorithm 1, where quantization is the identity, converges at rate
$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(\mu_t)\|^2 \le \frac{2(f(\mu_0) - f(x^*))}{H\sqrt{T}} + \frac{6(\sigma^2 + 6H\varsigma^2)}{\sqrt{T}} + \frac{Cn^2\rho_{\max}^3H^2L^2M^2}{T\rho_{\min}\lambda_2^2}.$$
Notice that in this case the term containing the second-moment bound $M^2$ is dampened by a factor of $\frac{1}{T}$; hence we can assume that Algorithm 1 converges at the close-to-optimal rate $O\left(\frac{2(f(\mu_0)-f(x^*))}{H\sqrt{T}} + \frac{6H(\sigma^2 + 6\varsigma^2)}{\sqrt{T}}\right)$. This result still improves upon previous analyses [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020] in the sense that communication is completely non-blocking (there is no τ), and we allow for local steps. Further, in the absence of quantization, and assuming that nodes perform a single local gradient step, we can entirely remove assumption (4) when T is large enough (e.g. for the fully-connected graph we will need $T \ge \Omega(n^3)$).
More precisely, we can attain a convergence rate of $O\left(\frac{f(\mu_0)-f(x^*)}{\sqrt{T}} + \frac{\sigma^2+\varsigma^2}{\sqrt{T}} + \frac{n^3\rho_{\max}^4L^2(\sigma^2+\varsigma^2)}{T\rho_{\min}\lambda_2^3}\right)$. We leave the proof of this last extension to the full version of this work.

5 Experimental Results

In this section, we validate our analysis by applying the algorithm to training deep neural networks for image classification and machine translation. We map the algorithm onto a multi-node supercomputing setting, in which we have a large number of compute nodes, connected by fast communication links. The key overhead in this setting is synchronization: at large node counts, the cost of synchronizing all nodes so they execute in lock-step can be very high; see e.g. [Li et al., 2019] for numerical results on different workloads. Transmission cost also becomes significant at large node counts and large model sizes. Decentralized methods can mitigate this overhead, since nodes synchronize only sporadically and in pairs.

Target System and Implementation. We run SwarmSGD on the CSCS Piz Daint supercomputer, which is composed of Cray XC50 nodes, each with a Xeon E5-2690v3 CPU and an NVIDIA Tesla P100 GPU, using a state-of-the-art Aries interconnect over a Dragonfly network topology, which is regular. Please see Piz [2019] for more details. We implemented SwarmSGD in Pytorch and TensorFlow using MPI-based primitives, with non-blocking averaging. The Pytorch implementation is built on top of the SGP framework [Assran et al., 2018], and uses SwarmSGD to train ResNets on the CIFAR-10/100 [Krizhevsky et al., 2014] and ImageNet [Russakovsky et al., 2015] datasets, while we use the TensorFlow implementation to train the original version of the Transformer-XL model [Vaswani et al., 2017] on the WMT17 (En-Ge) dataset. All algorithms use the same topology overlay, which is fully-connected: according to their theory and experiments, this well-connected overlay should maximize convergence speed. SGP was run with overlap factor 1, following Assran et al. [2018].

Training Process. Our training methodology follows data-parallel training, with some differences due to decentralization, and is identical to previous work on decentralized and local SGD, e.g. Lian et al. [2017], Assran et al. [2018], Lin et al. [2018]. Training proceeds in epochs, each of which corresponds to the processes collectively performing a full pass over the dataset. At the beginning of each epoch, we re-shuffle the dataset and partition it among processes [Lin et al., 2018]. As noted in previous work [Lian et al., 2017, 2018, Assran et al., 2018], variants of decentralized SGD are not always able to recover sequential SGD accuracy within the same number of epochs as this baseline. This is justified theoretically, as the slower mixing can affect convergence, but also intuitively, as each model sees significantly fewer updates per epoch. Thus, we will allow the decentralized schemes to execute for more epochs, by a constant multiplier factor between 1 and 3. To reduce multipliers, we experimented with SlowMo [Wang et al., 2019]; we found that it improved results across methods on CIFAR-10, but not at ImageNet scale; therefore, the provided results do not include it. Once we have fixed the number of epochs, we do not alter the other training hyperparameters: in particular, the learning rate schedule, momentum and weight decay terms are identical to the standard values for sequential SGD, for each individual model.

Accuracy and Speed.
Accuracy and Speed. We first examined whether SwarmSGD can in fact recover full accuracy versus the sequential or large-batch SGD baselines. In Table 1 we provide an overview of the parameter values used to recover large-batch SGD accuracy (following Goyal et al. [2017]) using SwarmSGD, on the ResNet, ImageNet and CIFAR tasks. We execute on 32 nodes on ImageNet, and 8 nodes on CIFAR-10. (Local batch sizes are 128 for ResNet20, ResNet50, and ResNet18. Quantization is not applied in these experiments.) The results show that Swarm can recover or slightly exceed the accuracy of the large-batch baselines, and that it has lower practical communication cost relative to existing methods (see Figure 2(b), where we separate the average computation cost per batch). However, Swarm requires significant additional passes over the data (up to 2.7×) to achieve full accuracy, which negates its performance benefits in this specific setting, relative to large-batch SGD. (Please see the Supplementary for end-to-end time comparisons.) This partly negative finding is in line with previous work on decentralized methods [Assran et al., 2018]. Next, we examine accuracy on the WMT17 task. The results are provided in Figure 1(a), in accuracy-versus-time format, for 16 and 32 nodes, executing for 10 global epochs. Here, the large-batch SGD (LB-SGD) baseline (BLEU score 26.1 at 16 nodes) is a poor alternative at high node counts due to model size: its throughput is low, and drops catastrophically at 64 nodes due to the network becoming severely bandwidth-bottlenecked (see Figure 1(b)). At 16 nodes, Swarm slightly exceeds the baseline accuracy at 26.17 BLEU, for an end-to-end speedup of ∼1.5×. In the same setting, Swarm outperforms all other decentralized methods (the fastest previous method, AD-PSGD, is 30% slower, and less accurate), both in terms of BLEU score and in terms of end-to-end time. (The objective loss graph is similar, and is provided in the Appendix.) At 32 nodes, all decentralized methods reach lower scores (∼23.5) after 10 epochs. However, we observed experimentally that running Swarm for an additional 5 epochs (multiplier 1.5) at 32 nodes recovered a BLEU score of ∼25.72, and is 30% faster than the 16-node version in terms of end-to-end time (omitted for visibility). In addition, we investigated 1) the accuracy of the real average of all models throughout training: it is usually more accurate than an arbitrary model, but not significantly so, corroborating the claim that individual models tend to stay close to the mean; 2) the influence of the number of local steps on accuracy: perhaps surprisingly, we were able to recover baseline accuracy on ResNet18/ImageNet for up to 4 local steps (see Figure 2(a)); 3) the impact of quantization on convergence, where we were able to recover accuracy when applying 8-bit model quantization to Swarm. We encourage the reader to examine the full experimental report in the Appendix, which contains data on these experiments, as well as additional ablation studies.

Discussion. Generally, the performance of SwarmSGD appears to be slightly superior to that of previous decentralized methods (see Figure 1 for an illustration, and Figure 2(b) for a performance breakdown). We investigated this advantage, and found that the per-step communication cost of Swarm, without quantization, is similar to AD-PSGD; however, our algorithm benefits from the reduction in communication frequency: nodes communicate at least 2× less often, and therefore incur lower average communication cost.
In particular, a closer examination of the average batch times in Figure 2(b) shows that the time per node per batch (including communication and computation) is largely constant as we increase the number of nodes, which suggests good scaling behaviour. The main disadvantage of Swarm is that, similar to previous decentralized methods, it may need additional data passes in order to fully recover accuracy at high node counts. However, we also note that our method did not benefit from the high level of hyperparameter tuning applied to large-batch SGD, e.g. Goyal et al. [2017]. We find it interesting that this accuracy issue is less prevalent in the context of large, over-parameterized models, such as the Transformer, where Swarm can be a viable alternative to large-batch SGD within the same number of epochs.

6 Conclusions and Future Work

We analyzed the convergence of SGD in an extremely decoupled model of distributed computing, in which nodes mostly perform independent SGD updates, interspersed with intermittent pairwise averaging steps, which may be performed in an inconsistent and noisy manner. We showed that SGD still converges in this restrictive setting, even under these consistency relaxations. Empirical results complement our analysis, showing that this method can outperform previous decentralized algorithms, and can even be competitive against large-batch SGD for very large models. A natural extension would be to generalize the bounds to arbitrary communication graphs. From the practical perspective, one extension would be to reduce the number of additional training epochs, and to experiment on large-scale decentralized testbeds.

Acknowledgments and Disclosure of Funding

We gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). PD partly conducted this work while at IST Austria and was supported by the European Union's Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 754411. SL was funded in part by the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant agreements DAPP, No. 678880, and EPiGRAM-HS, No. 801039).
1. What is the focus of the paper regarding decentralized machine learning? 2. What are the strengths of the proposed algorithm, particularly in terms of convergence and scalability? 3. What are the weaknesses of the paper, especially regarding theoretical analysis and experimentation? 4. Do you have any concerns about the novelty and practicality of the method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The paper introduces SwarmSGD, a novel method that makes use of a graph of computational workers without central synchronisation. To minimize synchronisation overhead while maintaining consistency between the workers' versions of the model, asynchronous, non-blocking random peer-to-peer averaging takes place every few local gradient steps, and quantization is supported. The paper then provides theoretical analysis (under fairly restrictive assumptions), illustrating in which regime this extremely decentralised algorithm can be expected to converge (on average). Finally, a thorough experimental section on significant benchmarks is run, demonstrating the actual potential of the method.

Review Generally speaking, the paper is clearly written and easy to follow. The authors provide valuable insights both in the description of the methods themselves, in the theoretical results and (perhaps most importantly) in their proofs. The Discussion parts are much appreciated, and this type of analysis should become a staple for this type of paper. The presented algorithm is novel and presents some very interesting characteristics, leading to potential gains in real-world ML workloads. The experimental section contains very large scale experiments (at least for a paper in this theme), and demonstrates real potential.

There remain a few weaknesses in the paper, which I believe can be addressed with a bit more work.

The first one is some clarity issues:
- In the initial description of the algorithm, it's hard to understand whether the general model is quantized or not (l65 does not reference the crucial dequantization step before applying the gradient steps).
- l85: 'convergence for non-convex objectives' is not defined, making the statement harder to interpret.
- Some assumptions are not explicit: being able to simply replace T by nT in the bound relies on the strong assumption that computing gradients on different devices takes the same time.
- How expensive the quantization/dequantization steps are compared to computing gradients is not made explicit. That would be quite helpful to assess how useful quantization is.
- That the optimal speedup is O(sqrt(n)) is not explained. Typically, using n machines one expects the optimal speedup to be O(n). This is the case even when analyzing linearly convergent algorithms. If we were to instead rely on the dependency in T, the optimal speedup there would be O(exp(n)).

The second issue is that there appears to be some overclaiming in terms of the theoretical results. While the paper promises non-blocking updates, these are later supposed to be atomic. Similarly, the assumption on l254 that Xi_t and Enc(Xi_t) are 'simultaneous' seems like a pretty strong assumption. Of course it simplifies the analysis, but the crucial question is: can the proof be carried out without it? If not, then it is an integral assumption, which should be made explicit. The results also suffer from fairly restrictive assumptions. While not unheard of, the necessary bounds are strong restrictions.

Finally, perhaps not enough space is devoted to the experimental section:
- It would be nice to describe the exact models trained in more detail. The WMT model in particular is referred to rather elliptically.
- The WMT results for SGD are quite surprising: why does the 32-device run converge to worse performance than the 16-device one? It could conceivably be slower, but as it has a bigger batch size it should converge to a better end BLEU performance.
- BLEU is referred to several times as 'accuracy'.
It's not an accuracy, it's a score.
- For machine translation tasks using transformers, practitioners typically do not use SGD but Adam instead. Is there a clear path to SwarmAdam?

All told, this is a nice paper, whose potential is not quite fully exploited in its current form. I recommend acceptance, and would be willing to upgrade my score depending on the authors' response.
NIPS
Title Asynchronous Decentralized SGD with Quantized and Local Updates

Abstract Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogeneous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.

1 Introduction

Decentralized optimization has recently emerged as a promising approach for scaling the distributed training of machine learning models, in particular via stochastic gradient descent (SGD) [Lian et al., 2017, Tang et al., 2018, Koloskova et al., 2019a]. Its key advantage is that it removes the need for a central coordinator node in distributed training, and therefore allows for high scaling. The general decentralized optimization setting is the following: we are given n nodes, each with a subset of data from some distribution, which can communicate over some underlying graph topology. In each global round, each node samples some local data, performs a local gradient step, and is paired with a neighbor, which may be chosen randomly. The nodes exchange model information pairwise, and then update their models, often via direct model averaging. Variants of this setting have been analyzed since pioneering work by Tsitsiklis [1984], for various estimation and optimization algorithms [Xiao and Boyd, 2004, Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014], and have seen renewed interest given their applicability to training deep neural networks (DNNs) at scale, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Recently, there has been significant focus on reducing the synchronization overheads for decentralized training, usually employing three approaches: 1) implementing faster non-blocking communication between communication partners at a round [Lian et al., 2018, Assran et al., 2018], which may cause them to see stale versions of their models, 2) allowing nodes to take local steps in between their communication rounds [Wang and Joshi, 2018, Koloskova et al., 2020], and 3) applying quantization to the communication [Lu and De Sa, 2020, Tang et al., 2018, Koloskova et al., 2019a,b].
The above impressive line of work contributes a rich set of algorithmic and analytic ideas; however, one common limitation is that the algorithms are usually set in the synchronous gossip model, which requires all nodes to perform their communication in lock-step rounds and to share a common notion of time, thus reducing their practicality. To mitigate this fact, some references, e.g. [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020], partially relax this requirement, although they do so at the cost of additional assumptions, or reduced guarantees, as we discuss in related work. Another relative limitation is that the analyses are usually customized to the bespoke communication-reduced methods being applied, and are therefore hard to generalize to other methods.

Our Contribution. In this paper, we consider decentralized SGD-based optimization in the simpler, but harder to analyze, asynchronous gossip model [Xiao and Boyd, 2004], in which communication occurs in discrete, randomly chosen pairings among nodes, and does not require a common notion of time. We prove that a new variant of SGD we call SwarmSGD converges in this setting, even though it supports all three communication-reduction approaches mentioned above in conjunction. Our analysis generalizes to heterogeneous data distributions and communication topologies.

At a high level, SwarmSGD works as follows. Each node i maintains a local model estimate Xi, based on which gradients are generated, and a shared buffer where quantized models are stored for communication with other nodes. In each step, node i first computes a sequence of H local gradient steps, which it does not yet apply. Next, the node chooses a communication partner j, uniformly at random among its neighbors. Then, node i reads from its own communication buffer and from the communication buffer of j, obtaining quantized models Qi and Qj. A subtlety here is that Qi is not necessarily the quantized version of the model Xi, since other nodes can write concurrently to i's buffer. Node i then averages Qi with Qj, and updates the neighbor's remote buffer to the quantized average. Finally, it applies its local gradient steps to the resulting average, adopts this as its next model Xi, and writes a quantized version of it in its own shared buffer. This procedure can be implemented in a deadlock-free, non-blocking manner, by using either shared memory or the remote direct memory access (RDMA) calls supported by MPI [Woodall et al., 2006]. Importantly, the communication partner j does not need to block its computation during communication, and may be contacted by more than one interaction partner during a single local step, although we do assume that individual reads and writes are performed atomically.

A key component of this procedure is the quantization scheme: directly using an unbiased quantizer, e.g. [Alistarh et al., 2017], would destroy convergence guarantees, as the quantization error would be proportional to the model norm, which may not be bounded. Instead, we use a customized variant of the quantization scheme of Davies et al. [2021], whose error depends on the distance between the point being quantized (the model) and an arbitrary reference point, provided as a parameter. We prove that each node can reliably use its own model as a reference point to quantize and de-quantize messages placed in its buffer by other nodes. In turn, this requires care in the analysis. Specifically, the key observation behind our analysis is in showing that the nodes' local models stay well-enough concentrated around their mean throughout optimization to allow for correct decoding of quantized models, which in turn implies joint convergence by the nodes towards a point of vanishing gradient. This concentration follows via a non-trivial super-martingale argument.
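The following toy simulation sketches one sequential SwarmSGD interaction under simplifying assumptions: identity quantization, no buffers or staleness, and a fully-connected overlay. All function and variable names are illustrative; this is not the paper's implementation.

import numpy as np

def swarm_interaction(models, i, j, grad, eta=0.05, mean_local_steps=2,
                      quantize=lambda v: v, rng=np.random.default_rng()):
    # One sequential interaction: initiator i accumulates H_i local gradients
    # (not yet applied), averages models with partner j, then applies them.
    x = models[i].copy()
    h = np.zeros_like(x)
    H_i = rng.geometric(1.0 / mean_local_steps)  # geometric, mean = mean_local_steps
    for _ in range(H_i):
        h += grad(x - eta * h, i)                # q-th local gradient at shifted point
    avg = 0.5 * quantize(models[i]) + 0.5 * quantize(models[j])
    models[j] = avg                              # partner adopts the average
    models[i] = avg - eta * h                    # initiator also applies local steps

# Toy usage: n nodes jointly minimizing the average of ||x - c_i||^2.
rng = np.random.default_rng(1)
n, d = 8, 4
centers = rng.normal(size=(n, d))
models = rng.normal(size=(n, d))
grad = lambda x, i: 2.0 * (x - centers[i]) + 0.1 * rng.normal(size=d)
for _ in range(2000):
    i = int(rng.integers(n))
    j = int((i + 1 + rng.integers(n - 1)) % n)   # uniform random distinct partner
    swarm_interaction(models, i, j, grad, rng=rng)
print(np.linalg.norm(models.mean(axis=0) - centers.mean(axis=0)))  # small residual

Even in this noisy toy setting, the node mean converges near the global minimizer (the average of the centers), while individual models stay clustered around the mean.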
If nodes take a constant number of local SGD steps between communication steps, then SwarmSGD has a $\Theta(\sqrt{n})$ speedup to convergence for non-convex objectives. This matches results from previous work which considered decentralized dynamics but with global synchronization [Lian et al., 2017].

Experimental Validation. We apply SwarmSGD to train deep neural networks on image classification and machine translation (NMT) tasks, deployed on the Piz Daint supercomputer [Piz, 2019]. Experiments confirm the intuition that the average synchronization cost of SwarmSGD per iteration is low: it stays at less than 10% of the batch computation time, and remains constant as we increase the number of nodes. For example, using SwarmSGD, we are able to train a Transformer-XL [Vaswani et al., 2017] model on WMT17 (En-Ge) 1.5× faster than a highly-optimized large-batch SGD baseline, and to slightly higher accuracy, without additional hyper-parameter tuning. At the same time, due to the reduced communication frequency, Swarm also improves upon the speed of the previous practical decentralized methods, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Importantly, we also note that, in less overparametrized settings, such as training residual CNNs [He et al., 2016] on ImageNet [Russakovsky et al., 2015], nodes do need to perform more iterations over the dataset relative to the baseline in order to recover full accuracy. This is predicted by the analysis, and confirms similar findings in previous work [Assran et al., 2018]. Overall, our method does appear well-suited to training large modern models at node counts where global synchronization among all nodes is prohibitively expensive.

Related Work. Decentralized optimization has a long history [Tsitsiklis, 1984], and is related to the study of gossip algorithms, e.g. [Kempe et al., 2003, Xiao and Boyd, 2004, Boyd et al., 2006]. Gossip is usually studied in one of two models [Boyd et al., 2006]: synchronous, structured in global rounds, where each node interacts with a randomly chosen neighbor, forming a matching; and asynchronous, where each node wakes up at random times, e.g. given by a Poisson clock, and picks a random neighbor to interact with. Several classic optimization algorithms have been analyzed in the asynchronous gossip model [Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014]. In this paper, we focus on analyzing decentralized SGD in this model. As mentioned, the growing line of work on decentralized optimization for machine learning has mostly focused on variants of the synchronous gossip model. Specifically, Lian et al. [2017] considered this setting in the context of DNN training, while Tang et al. [2018] and Koloskova et al. [2019b] also analyzed decentralized optimization with quantization in the synchronous model. Wang and Joshi [2018] and Koloskova et al. [2020] provided analysis frameworks for synchronous decentralized SGD with local updates, and possibly changing topologies. Lian et al. [2018] and Assran et al.
[2018] focused specifically on reducing synchronization costs in this setting, and proposed algorithms with partially non-blocking communication, in which nodes may read a stale version of the interaction partner's information, modelling e.g. a communication buffer. However, the maximum staleness must be bounded by a global variable τ, which must be enforced throughout the execution. As observed by Assran et al. [2018], enforcing this bound can cause blocking, and therefore the authors of these works propose to implement a relaxed round-based model, in which nodes interact once per round in perfect matchings. Their algorithms provide $O(1/\sqrt{Tn})$ convergence rates, under analytical assumptions. Upon careful examination, we find that their analysis approach can be extended to the asynchronous gossip model we consider, by defining the "contact matrices" to correspond to pairwise interactions. However, this introduces two significant limitations. First, the analysis will not support local gradient updates to models, nor quantized communication; if we remove these practical relaxations, our technique yields better bounds, as our potential analysis is specifically tailored to this dynamic interaction model. Second, as we detail in the Appendix, some of their technical conditions imply the existence of global synchronization. For Assran et al. [2018], as we also detail in the Appendix, their analysis would not guarantee any non-trivial speedup due to parallelization in the asynchronous gossip model. We describe these issues in detail and present a systematic comparison in Appendix A.

Lu and De Sa [2020] provided a novel approach to analyze decentralized SGD with quantization and limited asynchrony: specifically, their algorithm requires blocking communication, i.e. nodes have to synchronize explicitly during interactions, but may see old versions of each other's models. More precisely, during each interaction, both parties are responsible for updating their local models, meaning that once a node is woken up (we call it the initiator node) and chooses an interaction partner, it has to block until the partner is woken up as well. In our case, the initiator can update both its local model and the local model of its partner, and proceed to the next step without blocking. Koloskova et al. [2019a] use a similar update rule in the synchronous model. Zhang and You [2021] recently proposed a decentralized algorithm which is fully-asynchronous as long as node activation rates and message delays are bounded. As noted earlier, bounding activation rates does imply blocking; however, tolerating (bounded) message delays does improve over our approach of updating models using atomic writes. The setting further differs in that they assume that nodes compute full (non-stochastic) gradients, and that the loss function satisfies the PL condition. In sum, we are the first to explicitly consider the asynchronous gossip model, and the impact of local updates, asynchrony, and quantization used in conjunction together with decentralized SGD. Our technique is new, relies on a fine-grained analysis of individual interactions, and can yield improved bounds even in the case where H = 1. Further, our algorithm is the first to allow for both communication compression and non-blocking communication. From the implementation perspective, the performance of our algorithm matches or improves on that of previous methods, notably D-PSGD [Lian et al., 2017], AD-PSGD [Lian et al., 2018] and SGP [Assran et al., 2018].
2 Preliminaries

The Distributed System Model. We consider a model which consists of n ≥ 2 nodes, each of which is able to perform local computation. We assume that the communication network of the nodes is a graph G with spectral gap λ2, which denotes the second smallest eigenvalue of the Laplacian of G. Let ρmax, ρmin be the maximum and minimum degrees in G, respectively. We will focus on densely-connected topologies, which model supercomputing and cloud networks: for instance, the standard Dragonfly topology [Kim et al., 2008, Besta and Hoefler, 2014] is regular, densely connected and low-diameter, mimicking regular expanders. The execution is modelled as occurring in discrete steps, where in each step a new node (the "initiator") is sampled, and can then contact one of its neighbors (the "responder") uniformly at random. (At the algorithm level, the initiator is "sampled" once it completes its current computational step, and seeks to interact with a neighbor.) We denote the number of steps for which we run by T. Globally, the communication steps can be seen as a sequence of sampled directed communication edges. Thus, the basic unit of time is a single pairwise interaction between two nodes. Notice however that in a real system Θ(n) of these interactions could occur in parallel. Thus, the standard global time measure is parallel time, defined as the total number of interactions divided by n, the number of nodes. Parallel time intuitively corresponds to the average number of interactions per node until convergence. This model is identical to the asynchronous gossip model [Xiao and Boyd, 2004], and to the population protocol model [Angluin et al., 2006].

Stochastic Optimization. We assume that the agents wish to jointly minimize a d-dimensional, differentiable function f : R^d → R. Specifically, we will assume the empirical risk minimization setting, in which agents are given access to a set of m data samples S = {s1, ..., sm} coming from some underlying distribution D, and to functions ℓi : R^d → R which encode the loss of the argument at the sample si. The goal of the agents is to converge on a model x* which minimizes the empirical loss over the m samples, that is,
$$x^* = \operatorname{argmin}_x f(x) = \operatorname{argmin}_x \frac{1}{m}\sum_{i=1}^{m} \ell_i(x).$$
We assume that each agent i has a local function fi associated to its fraction of the data, i.e. $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ for all $x \in \mathbb{R}^d$. Agents employ these samples to run a decentralized variant of SGD, described in detail in the next section. For this, we will assume that each agent i has access to unbiased stochastic gradients $\tilde g_i$ of the function $f_i$, i.e. functions such that $\mathbb{E}[\tilde g_i(x)] = \nabla f_i(x)$. Stochastic gradients can be computed by each agent by sampling i.i.d. from the distribution D, and computing the gradient of f at the current model x with respect to that sample. Our analysis also extends to the case where each agent samples from its own partition of the data. We assume the following conditions about the objective function, although not all our results require the second-moment bound:

1. Smooth Gradients: The gradient $\nabla f_i(x)$ is L-Lipschitz continuous for some L > 0, i.e. for all $x, y \in \mathbb{R}^d$ and every agent i:
$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|. \quad (1)$$

2. Bounded Variance: The variance of the stochastic gradients is bounded by some σ² > 0, i.e. for all $x \in \mathbb{R}^d$ and every agent i:
$$\mathbb{E}\,\|\tilde g_i(x) - \nabla f_i(x)\|^2 \le \sigma^2. \quad (2)$$

3. Bounded Local Function Variance: There exists ς² > 0 such that for all $x \in \mathbb{R}^d$:
$$\frac{1}{n}\sum_{i=1}^{n} \|\nabla f(x) - \nabla f_i(x)\|^2 \le \varsigma^2. \quad (3)$$
4. Bounded Second Moment: The second moment of the stochastic gradients is bounded by some M² > 0, i.e. for all $x \in \mathbb{R}^d$ and every agent i:
$$\mathbb{E}\,\|\tilde g_i(x)\|^2 \le M^2. \quad (4)$$

Note that throughout this paper, for any random variable X, by $\mathbb{E}\|X\|^2$ we mean $\mathbb{E}[\|X\|^2]$. Each node has a communication buffer, which, for simplicity, we assume can be read and written atomically by each node. Importantly, buffers can only hold quantized vectors.

Quantization Procedure. We use a quantization function which follows from Lemma 23 in (the full version of) Davies et al. [2021].

Corollary 2.1. (Quantization for Communication Buffers) Fix parameters R and ε > 0. There exists a quantization procedure defined by an encoding function $\mathrm{Enc}_{R,\varepsilon}: \mathbb{R}^d \to \{0,1\}^*$ and a decoding function $\mathrm{Dec}_{R,\varepsilon}: \mathbb{R}^d \times \{0,1\}^* \to \mathbb{R}^d$ such that, for any vector $x \in \mathbb{R}^d$ which we are trying to quantize, and any vector y which is used for decoding, which we call the decoding key, if $\|x - y\| \le \varepsilon R^{R^d}$ then with probability at least $1 - \log\log\big(\tfrac{\|x-y\|}{\varepsilon}\big)\, O(R^{-d})$, the function $Q_{R,\varepsilon}(x) = \mathrm{Dec}_{R,\varepsilon}(y, \mathrm{Enc}_{R,\varepsilon}(x))$ has the following properties:
1. (Unbiased decoding) $\mathbb{E}[Q_{R,\varepsilon}(x)] = \mathbb{E}[\mathrm{Dec}_{R,\varepsilon}(y, \mathrm{Enc}_{R,\varepsilon}(x))] = x$;
2. (Error bound) $\|Q_{R,\varepsilon}(x) - x\| \le \varepsilon(R^2 + 7)$;
3. (Communication bound) To compute $\mathrm{Dec}_{R,\varepsilon}(y, \mathrm{Enc}_{R,\varepsilon}(x))$, only the first B bits of $\mathrm{Enc}_{R,\varepsilon}(x)$ are needed, where $B = O\big(d \log\big(\tfrac{R\|x-y\|}{\varepsilon}\big)\big)$.

Proof. Lemma 23 of the full version of Davies et al. [2021] provides similar guarantees to the ones we want to prove, but assumes interactive message-passing communication between an encoding node u and a decoding node v. However, in their setting, the messages sent by u are non-adaptive: u simply sends quantizations using an increasing number of bits, until v replies confirming that it has decoded successfully. The number of bits sent during communication is upper bounded by $O\big(d \log\big(\tfrac{R\|x-y\|}{\varepsilon}\big)\big)$, where x is the vector node u is sending and y is the vector node v is using for decoding. In our setting, we use communication buffers, so node u can simply append all of its potential messages together as $\mathrm{Enc}_{R,\varepsilon}(x)$. Critically, notice that node u should append enough bits so that decoding is possible (since in our setting there is no way for v to acknowledge that it has received enough bits). This can be done in two ways. If u knows the distance between x and y, then u can simply write $O\big(d \log\big(\tfrac{R\|x-y\|}{\varepsilon}\big)\big)$ bits in the register. In the second case, u does not know the distance. Let T be the total number of times nodes communicate throughout our algorithm. We will show that with high probability all distances between encoded and decoding vectors will be at most $\varepsilon T^{17} R$ (the dependence on T stems from the fact that we wish to show an upper bound with high probability; please see Lemma B.19 in the Appendix), and therefore at most O(d log T) bits for quantization will suffice in the worst case. Thus, the node writes O(d log T) bits in the register, but when v tries to decode, it does not need all those bits: it reads and uses only the first $O\big(d \log\big(\tfrac{R\|x-y\|}{\varepsilon}\big)\big)$ bits.

Counting Communication Cost. We emphasize that, when we calculate the number of bits needed by quantization, we actually aim to measure the number of bits exchanged between u and v. In the setting we consider, which has local registers/communication buffers, this is the number of bits spent to read from (or write to) a non-local register. Since the second case above involves writing a relatively large number of bits, we will use it only when u is writing a quantized value to its own register/buffer, and so does not need to communicate the bits. Then, only the $O\big(d \log\big(\tfrac{R\|x-y\|}{\varepsilon}\big)\big)$ bits read by v need to be communicated. To summarize, in our algorithm we will always ensure that whenever some node u writes a quantized value, it either knows the key which will be used for decoding it, or is writing to its local register.
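To build intuition for decoding against a reference point, here is a deterministic per-coordinate toy with the same qualitative behaviour; it is emphatically not the (unbiased, lattice-based) scheme of Davies et al. [2021], and all names are illustrative assumptions.

import numpy as np

def encode(x, eps, m):
    # Snap x to an eps-grid and keep only each coordinate's residue modulo m,
    # i.e. log2(m) bits per coordinate.
    return np.mod(np.rint(x / eps).astype(np.int64), m)

def decode(y, code, eps, m):
    # Recover the grid point nearest to the reference y with matching residues.
    # Succeeds (error at most eps/2 per coordinate) whenever the decoding key
    # satisfies |x_k - y_k| < eps * m / 2 in every coordinate.
    base = np.rint(y / eps).astype(np.int64)
    delta = np.mod(code - base, m)
    delta = np.where(delta >= m // 2, delta - m, delta)  # center into [-m/2, m/2)
    return eps * (base + delta)

# Usage: the decoder's key y only needs to be close to x, not equal to it.
rng = np.random.default_rng(2)
x = rng.normal(size=5)
y = x + 0.01 * rng.normal(size=5)                 # nearby reference point
code = encode(x, eps=1e-3, m=64)                  # 6 bits per coordinate
print(np.abs(decode(y, code, eps=1e-3, m=64) - x).max())  # <= eps / 2

As in Corollary 2.1, fewer bits suffice when the reference is closer: the modulus m, and hence the bit count log2(m), only needs to exceed twice the grid distance between x and y.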
Since the second case above involves writing a relatively large number of bits, we will use it only when u is writing a quantized value to its own register/buffer, and so does not need to communicate the bits. Then, only the O ( log(R ‖x− y‖ ) bits read by v need to be communicated. To summarize, in our algorithm we will always ensure that whenever some node uwrites a quantized value, it either knows the key which will be used for decoding it, or is writing to its local register. 3 The SwarmSGD Algorithm We now describe a decentralized variant of SGD, designed to be executed by a population of n nodes, interacting over the edges of communication graph G, as described above. The algorithm proceeds in individual communication steps, where in each step a node which has completed its local computation, seeks a random neighbor to communicate with. We will alternatively say that node gets activated (once it finished computation) and then becomes initiator of the interaction. The Communication Registers. The communication buffer of each node i consists of two registers: one containing an encoded version of its own (possibly outdated) model, which will only be written to by node i itself, and one for holding an encoded version of its current model, which will only be written to by other nodes. (This second register can be seen as a “communication queue” for the nodes wishing to communicate with i.) Initially all registers contain zero vectors. Parallel Execution. For simplicity we will skip the details of quantization, and assume that nodes write and read quantized models directly, without encoding and decoding steps. Both current and outdated models are zero vectors initially. Each node i computes a number of local gradients based on its last model view, and other nodes may update this model while i is computing. Hence, only after node i is done with computing local gradients does it read its updated current model. Let X̂i be the value of the outdated model and let Xi be the value of the current model. Node i computes the average of quantized models Q(Xi)+Q(Xj)2 and writes it in a register which contains current model of node j. Next, it computes Q(Xi)+Q(Xj)2 − ηh̃(X̂i) (where η is a learning rate and h̃(X̂i) is a sum of local gradients), and writes it in both of its local registers, one containing the current model and one containing the outdated model. Once the write is finished, it again proceeds to compute local gradients, based on the view Q(Xi)+Q(Xj)2 − ηh̃(X̂ i t). Sequential model. For the analysis, it is useful to map these parallel interactions to a sorted sequence of sequential ones. Thus, time steps track the interactions between agents, and each interaction consists of random number of local steps steps which activated node performs, plus one averaging step where activated node (or initiator node) contacts its random neighbour. The analysis assumes that nodes get activated randomly, by independent Poisson clocks, which leads to a uniform global sampling distribution. In practice, this could be approximated by having the number of local gradient steps executed by each node be a geometric random variable of mean H . For the sake of practicality, our experiments will take H to be a small constant, instead of a random variable, which yields similar results. The pseudocode from the point of view of a single node i which was activated at step t+ 1 is given in Algorithm 1. 
For t ≥ 0, let $\mathrm{Enc}(\hat X^i_t)$ and $\mathrm{Enc}(X^i_t)$ be the values written in the registers containing the outdated and the current model of agent i after t steps, respectively. That is, $X^i_t$ is the current model of agent i and $\hat X^i_t$ is its outdated model.

The Communication Procedure. Since i was activated at step t, we will assume that it has already computed $H_i$ local gradients using the outdated model $\hat X^i_t$, where $H_i$ is a geometric random variable with mean H, as follows. Let $\tilde h^0_i(\hat X^i_t) = 0^d$; for indices $1 \le q \le H_i$, let $\tilde h^q_i(\hat X^i_t) = \tilde g_i\big(\hat X^i_t - \eta \sum_{s=0}^{q-1} \tilde h^s_i(\hat X^i_t)\big)$ be the q-th local gradient. Then, let $\tilde h_i(\hat X^i_t) = \sum_{q=1}^{H_i} \tilde h^q_i(\hat X^i_t)$ be the sum of all computed local gradients. Alternatively, since we are in a sequential setting, we can assume that i does this computation at step t. First, i retrieves $Q(X^i_t)$ (the quantized version of its current model) by decoding $\mathrm{Enc}(X^i_t)$ using the key $Q(\hat X^i_t)$. We note that i can obtain $Q(\hat X^i_t)$ simply by decoding $\mathrm{Enc}(\hat X^i_t)$ using the key $\hat X^i_t$ (which it knows, to full precision, since it calculated the value itself); this step does not cost any communication bits, since all of the terms involved are local to i's registers. Then, it contacts its interaction partner j. Node i calculates $Q(\hat X^j_t)$ by decoding $\mathrm{Enc}(\hat X^j_t)$, again using $\hat X^i_t$ as a key, and then it retrieves $Q(X^j_t)$ by decoding $\mathrm{Enc}(X^j_t)$ with the key $Q(\hat X^j_t)$. Then, i calculates
$$X^i_{t+1} = \frac{Q(X^i_t)}{2} + \frac{Q(X^j_t)}{2} - \eta\, \tilde h_i(\hat X^i_t) \quad \text{and} \quad X^j_{t+1} = \frac{Q(X^j_t)}{2} + \frac{Q(X^i_t)}{2}.$$
Next, node i calculates $\mathrm{Enc}(X^i_{t+1})$ and writes it to its own register for outdated models. Here, we use the first case for quantization from Corollary 2.1: i is not aware of the key that other nodes will use for decoding, but since it is writing to its own local register, it can afford to use the worst-case O(d log T) bits. Additionally, it writes $\mathrm{Enc}(X^i_{t+1})$ to its own register containing the current model, so that there are enough bits for $Q(\hat X^i_{t+1})$. (Note that $\hat X^i_{t+1} = X^i_{t+1}$ has to be used as the decoding key.) Finally, it calculates $\mathrm{Enc}(X^j_{t+1})$ and writes it in the register which contains the current model of j, using enough bits that it can be decoded using $Q(\hat X^j_{t+1})$ (we have that $\hat X^j_{t+1} = \hat X^j_t$). Notice that, the way our algorithm is specified, every node which tries to decode $\mathrm{Enc}(X^j_{t+1})$ will use $Q(\hat X^j_{t+1})$ as a key (which i knows), hence Corollary 2.1 holds in this case as well. We emphasize that all this communication is one-way, as it does not require j's intervention. By Corollary 2.1, the total number of bits used is
$$O\Big(d \log\big(\tfrac{R}{\varepsilon}\|\hat X^i_t - \hat X^j_t\|\big)\Big) + O\Big(d \log\big(\tfrac{R}{\varepsilon}\|Q(\hat X^j_t) - X^j_t\|\big)\Big) + O\Big(d \log\big(\tfrac{R}{\varepsilon}\|Q(\hat X^j_t) - X^j_{t+1}\|\big)\Big).$$
(Recall that we count only reads from and writes to other registers, and do not count operations i performs on its own registers.) We will show that we can make the probability of any instance of quantization failing less than $1/T^c$, for some sufficiently large constant c, by setting the constant factor in the number of bits sufficiently high. Then, we can take a union bound over all instances of quantization throughout the algorithm, to show that none fail with high probability in T. We will then be able to prove the convergence of our algorithm conditioned on this event.

Algorithm 1 Sequential SwarmSGD pseudocode for each interaction between nodes i and j.
1: % Let G be a communication graph.
2: % Initial models $X^1_0 = X^2_0 = \dots = X^n_0$
3: for t = 0 to T − 1 do
4:   Sample the initiator node i uniformly at random.
5:   Node i samples a node j, adjacent to it in G, uniformly at random.
6:   Let $t - \tau^i_t$ be the last step at which node i was chosen as initiator.
7:   Let $\hat X^i_t = X^i_{t-\tau^i_t}$ be its model from that step.
8:   $Q(X^i_t) \leftarrow \mathrm{Dec}(Q(\hat X^i_t), \mathrm{Enc}(X^i_t))$
9:   $Q(\hat X^j_t) \leftarrow \mathrm{Dec}(\hat X^i_t, \mathrm{Enc}(\hat X^j_t))$
10:  $Q(X^j_t) \leftarrow \mathrm{Dec}(Q(\hat X^j_t), \mathrm{Enc}(X^j_t))$
11:  $X^i_{t+1} \leftarrow Q(X^i_t)/2 + Q(X^j_t)/2 - \eta\, \tilde h_i(\hat X^i_t)$
12:  $X^j_{t+1} \leftarrow Q(X^i_t)/2 + Q(X^j_t)/2$
13:  Write $\mathrm{Enc}(X^i_{t+1})$ to the registers containing the current and outdated models of node i
14:  Write $\mathrm{Enc}(X^j_{t+1})$ to the register containing the current model of node j
15:  For $k \ne i, j$: $X^k_{t+1} = X^k_t$.
16: end for

Avoiding Race Conditions. An interesting question is what happens when multiple nodes contact j concurrently. For conciseness, our pseudocode assumes that the update sequence in lines 8–14 happens atomically, but this sequence can cause a data race. To mitigate this, we can use a bounded non-blocking queue [Michael and Scott, 1996] at each node instead of a single buffer. Thus, instead of updating the buffer value atomically, each node simply appends the corresponding quantized model mean to j's communication queue. In practice, this queue is extremely unlikely to be contended, since communication collisions are rare.

4 The Convergence of SwarmSGD

Let $\mu_t = \frac{1}{n}\sum_{i=1}^{n} X^i_t$ be the mean over the node models at time t. Our main result is the following:

Theorem 4.1. Assume a total number of steps T ≥ 10n, learning rate $\eta = n/\sqrt{T}$, and quantization parameters $R = 2 + T^{3/d}$ and $\varepsilon = \frac{\eta H M}{R^2 + 7}$. Then, with probability at least 1 − O(1/T), Algorithm 1 converges at rate
$$\frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\|\nabla f(\mu_t)\|^2 \le \frac{2(f(\mu_0) - f(x^*))}{H\sqrt{T}} + \frac{6(\sigma^2 + 6H\varsigma^2)}{\sqrt{T}} + \frac{12 H M^2}{\sqrt{T}} + C\,\frac{n^2 \rho_{\max}^3 H^2 L^2 M^2}{T \rho_{\min} \lambda_2^2},$$
for some constant C, and uses $O\big(d \log\big(\frac{\rho_{\max}^2}{\rho_{\min}\lambda_2}\big) + \log T\big)$ expected communication bits per step.

Discussion. First, this notion of convergence is standard in the non-convex case [Lian et al., 2015, 2017, 2018], and each of the upper-bound terms has an intuitive interpretation: the first represents the reduction in loss relative to the initialization, and gets divided by the number of local steps H, since progress is made in this term in every local step; the second represents the noise due to stochasticity, and is naturally linear in H, as H steps are taken in expectation between two interactions. (Recall that in our model T is the number of interactions, and TH is the expected number of gradient steps.) The fourth term encodes the overheads caused by local steps, quantization, and graph structure; however, it is usually seen as negligible (cf. [Lu and De Sa, 2020]), due to the division by T. The third term is the critical one, as it implies a dependence on the second-moment bound. Intuitively, this term appears because our algorithm combines both non-blocking communication and quantization: first, unlike prior work, we do not assume an explicit upper bound τ on communication delays; in conjunction with quantization, this unbounded delay implies that our estimate of the model average µt may become dependent on M for large delays, which causes this dependency. While this limitation appears inherent, we are able to remove it if we eliminate quantization: in this case, we get a negligible dependency on M. We formalize this in Corollary 4.2. Second, if we focus on the total number of steps to reach some error bound, we notice an interesting trade-off between the linear reduction in H in the first term, due to local steps, and the linear increase in H in the other terms; the sketch below makes this trade-off concrete.
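As a back-of-the-envelope illustration of this trade-off, one can tabulate the H-dependent terms of the bound for a few values of H; the constants below are arbitrary placeholders, not measured quantities.

# The three sqrt(T)-scale terms of Theorem 4.1's bound, as a function of H.
# All constants are placeholders chosen only to make the trade-off visible.
f_gap, sigma2, zeta2, M2, T = 10.0, 1.0, 1.0, 4.0, 1e6

for H in (1, 2, 4, 8, 16):
    descent = 2.0 * f_gap / (H * T ** 0.5)                # shrinks with more local steps
    noise = 6.0 * (sigma2 + 6.0 * H * zeta2) / T ** 0.5   # grows linearly in H
    staleness = 12.0 * H * M2 / T ** 0.5                  # grows linearly in H
    print(f"H={H:2d}  bound ~ {descent + noise + staleness:.4f}")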
Notice that, for dense and low-diameter graphs, such as the regular expanders popular in cluster networks, our convergence bound has no dependence on the graph parameters, and communication is linear in d. However, one limitation is that we could incur a log n dependency in the communication for highly-irregular and poorly-connected graphs. Finally, note that time T here counts total interactions. However, Θ(n) pairwise interactions occur independently in parallel, and so we can slightly abuse notation and replace T by nT in the above formula, to obtain the optimal $\Theta(\sqrt{n})$ speedup in terms of wall-clock time. Yet, this speedup is dampened by the variance due to noisy local gradient steps, a fact which we will revisit in the experimental section.

Proof Overview. At a high level, the argument rests on two technical ideas. The first is that, in spite of noise and local steps, the nodes' parameters remain concentrated around the mean µt. The second is to leverage this, and bound the impact of stochastic noise and model staleness on convergence. In particular, the main technical difficulty in the proof is to correctly "encode" the fact that the parameters are well concentrated around the mean. A natural approach is to bound the model variance Γt after t interactions. Formally, we define
$$\Gamma_t = \sum_{i=1}^{n} \|X^i_t - \mu_t\|^2, \quad \text{where } \mu_t = \frac{1}{n}\sum_{i=1}^{n} X^i_t,$$
as before. We bound the expected evolution of Γt over time, depending on the learning rate, the number of local steps, the quantization parameter, and the bound provided by the assumption on the stochastic gradients (the bound M²). The critical point is that the upper bound on the expectation of Γt does not depend on the number of interactions t. More precisely, if all the above hyper-parameters are constant, we get that E[Γt] = O(n). Our approach brings over tools from classic load-balancing [Berenbrink et al., 2009] to the multi-dimensional case. Three key elements of novelty in our case are that (1) for us the load-balancing process is dynamic, in the sense that new loads, i.e. gradients, get continually added; (2) the load-balancing process we consider is multi-dimensional, whereas the literature usually considers simple scalar weights; (3) the models can be outdated and quantized, which leads to a complex, noisy load-balancing process. We resolve this third and most challenging issue by using carefully-defined auxiliary potentials.

Removing the Second-Moment Bound. Upon reflection, we notice that we can render the dependency on M² negligible if we do not use quantization, but otherwise keep the algorithm the same:

Corollary 4.2. Given the previous assumptions and learning rate $\eta = n/\sqrt{T}$, for some constant C, we have that Algorithm 1, where quantization is the identity, converges at rate
$$\frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\|\nabla f(\mu_t)\|^2 \le \frac{2(f(\mu_0) - f(x^*))}{H\sqrt{T}} + \frac{6(\sigma^2 + 6H\varsigma^2)}{\sqrt{T}} + \frac{C\, n^2 \rho_{\max}^3 H^2 L^2 M^2}{T \rho_{\min} \lambda_2^2}.$$

Notice that in this case the term containing the second-moment bound M² is dampened by a factor of 1/T, hence we can assume that Algorithm 1 converges at the close-to-optimal rate
$$O\left(\frac{2(f(\mu_0) - f(x^*))}{H\sqrt{T}} + \frac{6H(\sigma^2 + 6\varsigma^2)}{\sqrt{T}}\right).$$
This result still improves upon previous analyses [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020] in the sense that communication is completely non-blocking (there is no τ), and we allow for local steps. Further, in the absence of quantization, and assuming that nodes perform a single local gradient step, we can entirely remove assumption (4) when T is large enough (e.g. for the fully connected graph we will need T ≥ Ω(n³)).
More precisely, we can attain a convergence rate of
$$O\left(\frac{f(\mu_0)-f(x^*)}{\sqrt{T}} + \frac{\sigma^2+\varsigma^2}{\sqrt{T}} + \frac{n^3 \rho_{\max}^4 L^2(\sigma^2+\varsigma^2)}{T \rho_{\min} \lambda_2^3}\right).$$
We leave the proof of this last extension to the full version of this work.

5 Experimental Results

In this section, we validate our analysis by applying the algorithm to training deep neural networks for image classification and machine translation. We map the algorithm onto a multi-node supercomputing setting, in which we have a large number of compute nodes connected by fast communication links. The key overhead in this setting is synchronization: at large node counts, the cost of synchronizing all nodes so they execute in lock-step can be very high; see e.g. [Li et al., 2019] for numerical results on different workloads. Transmission cost also becomes significant at large node counts and large model sizes. Decentralized methods can mitigate this overhead, since nodes synchronize only sporadically and in pairs.

Target System and Implementation. We run SwarmSGD on the CSCS Piz Daint supercomputer, which is composed of Cray XC50 nodes, each with a Xeon E5-2690v3 CPU and an NVIDIA Tesla P100 GPU, using a state-of-the-art Aries interconnect over a Dragonfly network topology, which is regular. Please see Piz [2019] for more details. We implemented SwarmSGD in PyTorch and TensorFlow using MPI-based primitives, with non-blocking averaging. The PyTorch implementation is built on top of the SGP framework [Assran et al., 2018], and uses SwarmSGD to train ResNets on the CIFAR-10/100 [Krizhevsky et al., 2014] and ImageNet [Russakovsky et al., 2015] datasets, while we use the TensorFlow implementation to train the original version of the Transformer-XL model [Vaswani et al., 2017] on the WMT17 (En-Ge) dataset. All algorithms use the same topology overlay, which is fully-connected: according to their theory and experiments, this well-connected overlay should maximize convergence speed. SGP was run with overlap factor 1, following Assran et al. [2018].

Training Process. Our training methodology follows data-parallel training, with some differences due to decentralization, and is identical to previous work on decentralized and local SGD, e.g. Lian et al. [2017], Assran et al. [2018], Lin et al. [2018]. Training proceeds in epochs, each of which corresponds to the processes collectively performing a full pass over the dataset. At the beginning of each epoch, we re-shuffle the dataset and partition it among processes [Lin et al., 2018]. As noted in previous work [Lian et al., 2017, 2018, Assran et al., 2018], variants of decentralized SGD are not always able to recover sequential SGD accuracy within the same number of epochs as this baseline. This is justified theoretically, as the slower mixing can affect convergence, but also intuitively, as each model sees significantly fewer updates per epoch. Thus, we will allow the decentralized schemes to execute for more epochs, by a constant multiplier factor between 1 and 3. To reduce multipliers, we experimented with SlowMo [Wang et al., 2019]; we found that it improved results across methods on CIFAR-10, but not at ImageNet scale; therefore, the provided results do not include it. Once we have fixed the number of epochs, we do not alter the other training hyperparameters: in particular, the learning rate schedule, momentum and weight decay terms are identical to the standard values for sequential SGD, for each individual model.
Accuracy and Speed. We first examined whether SwarmSGD can in fact recover full accuracy versus the sequential or large-batch SGD baselines. In Table 1 we provide an overview of the parameter values used to recover large-batch SGD accuracy (following Goyal et al. [2017]) using SwarmSGD, on the ResNet, ImageNet and CIFAR tasks. We execute on 32 nodes on ImageNet, and 8 nodes on CIFAR-10. (Local batch sizes are 128 for ResNet20, ResNet50, and ResNet18. Quantization is not applied in these experiments.) The results show that Swarm can recover or slightly exceed the accuracy of the large-batch baselines, and that it has lower practical communication cost relative to existing methods (see Figure 2(b), where we separate the average computation cost per batch). However, Swarm requires significant additional passes over the data (up to 2.7×) to achieve full accuracy, which negates its performance benefits in this specific setting, relative to large-batch SGD. (Please see the Supplementary for end-to-end time comparisons.) This partly negative finding is in line with previous work on decentralized methods [Assran et al., 2018]. Next, we examine accuracy on the WMT17 task. The results are provided in Figure 1(a), in accuracy-versus-time format, for 16 and 32 nodes, executing for 10 global epochs. Here, the large-batch SGD (LB-SGD) baseline (BLEU score 26.1 at 16 nodes) is a poor alternative at high node counts due to model size: its throughput is low, and drops catastrophically at 64 nodes due to the network becoming severely bandwidth-bottlenecked (see Figure 1(b)). At 16 nodes, Swarm slightly exceeds the baseline accuracy at 26.17 BLEU, for an end-to-end speedup of ∼1.5×. In the same setting, Swarm outperforms all other decentralized methods (the fastest previous method, AD-PSGD, is 30% slower, and less accurate), both in terms of BLEU score and in terms of end-to-end time. (The objective loss graph is similar, and is provided in the Appendix.) At 32 nodes, all decentralized methods reach lower scores (∼23.5) after 10 epochs. However, we observed experimentally that running Swarm for an additional 5 epochs (multiplier 1.5) at 32 nodes recovered a BLEU score of ∼25.72, and is 30% faster than the 16-node version in terms of end-to-end time (omitted for visibility). In addition, we investigated 1) the accuracy of the real average of all models throughout training: it is usually more accurate than an arbitrary model, but not significantly so, corroborating the claim that individual models tend to stay close to the mean; 2) the influence of the number of local steps on accuracy: perhaps surprisingly, we were able to recover baseline accuracy on ResNet18/ImageNet for up to 4 local steps (see Figure 2(a)); 3) the impact of quantization on convergence, where we were able to recover accuracy when applying 8-bit model quantization to Swarm. We encourage the reader to examine the full experimental report in the Appendix, which contains data on these experiments, as well as additional ablation studies.

Discussion. Generally, the performance of SwarmSGD appears to be slightly superior to that of previous decentralized methods (see Figure 1 for an illustration, and Figure 2(b) for a performance breakdown). We investigated this advantage, and found that the per-step communication cost of Swarm, without quantization, is similar to AD-PSGD; however, our algorithm benefits from the reduction in communication frequency: nodes communicate at least 2× less often, and therefore incur lower average communication cost.
In particular, a closer examination of the average batch times in Figure 2(b) shows that the time per node per batch (including communication and computation) is largely constant as we increase the number of nodes, which suggests good scaling behaviour. The main disadvantage of Swarm is that, similar to previous decentralized methods, it may need additional data passes in order to fully recover accuracy at high node counts. However, we also note that our method did not benefit from the high level of hyperparameter tuning applied to large-batch SGD, e.g. Goyal et al. [2017]. We find it interesting that this accuracy issue is less prevalent in the context of large, over-parameterized models, such as the Transformer, where Swarm can be a viable alternative to large-batch SGD within the same number of epochs.

6 Conclusions and Future Work

We analyzed the convergence of SGD in an extremely decoupled model of distributed computing, in which nodes mostly perform independent SGD updates, interspersed with intermittent pairwise averaging steps, which may be performed in an inconsistent and noisy manner. We showed that SGD still converges in this restrictive setting, even under these consistency relaxations. Empirical results complement our analysis, showing that this method can outperform previous decentralized algorithms, and can even be competitive against large-batch SGD for very large models. A natural extension would be to generalize the bounds to arbitrary communication graphs. From the practical perspective, one extension would be to reduce the number of additional training epochs, and to experiment on large-scale decentralized testbeds.

Acknowledgments and Disclosure of Funding

We gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). PD partly conducted this work while at IST Austria and was supported by the European Union's Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 754411. SL was funded in part by the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant agreements DAPP, No. 678880, and EPiGRAM-HS, No. 801039).
1. What is the focus of the paper regarding distributed decentralized optimization? 2. What are the key components of the proposed asynchronous algorithm? 3. How does the algorithm reduce communication complexity, and what is the theoretical result supporting this aspect? 4. Are there any concerns or suggestions regarding the presentation of the algorithm or its application? 5. Can you provide examples of applicable quantization techniques that would aid in understanding the algorithm's functioning?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose an asynchronous algorithm for optimization in the distributed decentralized model. This algorithm consists of two major components: local model estimation and a quantization mechanism. Review In this paper, the authors propose an asynchronous algorithm for optimization in the distributed decentralized model, whose main idea is to decrease the communication complexity of the algorithm and address the communication bottleneck. The idea of the algorithm is the following: we randomly select some node i in the network graph G and then randomly select one of its neighbors j. After this, for node j we save the average of the quantized values for i and j; however, for node i we also update the model estimator via the local model update. This combination of local updates together with the quantization technique allows gains in communication complexity in comparison with local SGD, as shown in the experimental part of the paper. The presented algorithm is not restricted to some specific quantization technique, since the requirements on the Encoding and Decoding procedures are quite mild. However, this makes it harder to follow the intuition without any example of applicable quantizations (only a reference to another article is provided). Due to Corollary 4.2, the theoretical result is close to the optimal one, which is a strong point of the paper. Also, the experimental results show that the SwarmSGD algorithm works well in practice too; however, I found the figures too small and hard to read.
NIPS
Title Asynchronous Decentralized SGD with Quantized and Local Updates

Abstract Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogeneous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.

1 Introduction

Decentralized optimization has recently emerged as a promising approach for scaling the distributed training of machine learning models, in particular via stochastic gradient descent (SGD) [Lian et al., 2017, Tang et al., 2018, Koloskova et al., 2019a]. Its key advantage is that it removes the need for a central coordinator node in distributed training, and therefore allows for high scaling. The general decentralized optimization setting is the following: we are given n nodes, each with a subset of data from some distribution, which can communicate over some underlying graph topology. In each global round, each node samples some local data, performs a local gradient step, and is paired with a neighbor, which may be chosen randomly. The nodes exchange model information pairwise, and then update their models, often via direct model averaging. Variants of this setting have been analyzed since pioneering work by Tsitsiklis [1984], for various estimation and optimization algorithms [Xiao and Boyd, 2004, Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014], and have seen renewed interest given their applicability to training deep neural networks (DNNs) at scale, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Recently, there has been significant focus on reducing the synchronization overheads for decentralized training, usually employing three approaches: 1) implementing faster non-blocking communication between communication partners at a round [Lian et al., 2018, Assran et al., 2018], which may cause them to see stale versions of their models, 2) allowing nodes to take local steps in between their communication rounds [Wang and Joshi, 2018, Koloskova et al., 2020], and 3) applying quantization to the communication [Lu and De Sa, 2020, Tang et al., 2018, Koloskova et al., 2019a,b].
The above impressive line of work contributes a rich set of algorithmic and analytic ideas; however, one common limitation is that the algorithms are usually set in the synchronous gossip model, which requires all nodes to perform their communication in lock-step rounds and to share a common notion of time, thus reducing their practicality. To mitigate this, some references, e.g. [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020], partially relax this requirement, although they do so at the cost of additional assumptions or reduced guarantees, as we discuss in related work. Another relative limitation is that the analyses are usually customized to the bespoke communication-reduced methods being applied, and therefore are hard to generalize to other methods.

Our Contribution. In this paper, we consider decentralized SGD-based optimization in the simpler, but harder to analyze, asynchronous gossip model [Xiao and Boyd, 2004], in which communication occurs in discrete, randomly chosen pairings among nodes, and does not require a common notion of time. We prove that a new variant of SGD we call SwarmSGD converges in this setting, even though it supports all three communication-reduction approaches mentioned above in conjunction. Our analysis generalizes to heterogeneous data distributions and communication topologies.

At a high level, SwarmSGD works as follows. Each node i maintains a local model estimate Xi, based on which gradients are generated, and a shared buffer where quantized models are stored for communication with other nodes. In each step, node i first computes a sequence of H local gradient steps, which it does not yet apply. Next, the node chooses a communication partner j uniformly at random among its neighbors. Then, node i reads from its own communication buffer and from the communication buffer of j, obtaining quantized models Qi and Qj. A subtlety here is that Qi is not necessarily the quantized version of the model Xi, since other nodes can write concurrently to i's buffer. Node i then averages Qi with Qj, and updates the neighbor's remote buffer to the quantized average. Finally, it applies its local gradient steps to the resulting average, adopts this as its next model Xi, and writes a quantized version of it in its own shared buffer. This procedure can be implemented in a deadlock-free, non-blocking manner, by using either shared memory or the remote direct-memory access (RDMA) calls supported by MPI [Woodall et al., 2006]. Importantly, the communication partner j does not need to block its computation during communication, and may be contacted by more than one interaction partner during a single local step, although we do assume that individual reads and writes are performed atomically.

A key component of this procedure is the quantization scheme: directly using an unbiased quantizer, e.g. [Alistarh et al., 2017], would destroy convergence guarantees, as the quantization error would be proportional to the model norm, which may not be bounded. Instead, we use a customized variant of the quantization scheme of Davies et al. [2021], whose error depends on the distance between the point being quantized (the model) and an arbitrary reference point, provided as a parameter. We prove that each node can reliably use its own model as a reference point to quantize and de-quantize messages placed in its buffer by other nodes. In turn, this requires care in the analysis.
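To make the step concrete, the following is a minimal Python sketch of one sequential SwarmSGD interaction, under simplifying assumptions: a toy unbiased grid quantizer stands in for the Enc/Dec round trip of Corollary 2.1 below, the shared buffers are rows of a single array, and the names (quantize, swarm_step, grad) and the parameter values are illustrative rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, eps=1e-3):
    """Toy unbiased stochastic rounding onto an eps-grid. Stands in for the
    Enc/Dec round trip; the real scheme is distance-adaptive (Corollary 2.1)."""
    low = np.floor(x / eps) * eps
    return low + eps * (rng.random(x.shape) < (x - low) / eps)

def swarm_step(X, i, j, grad, eta=0.05, H=2):
    """One sequential SwarmSGD interaction: initiator i, responder j.
    X    : (n, d) array of node models, standing in for the shared buffers
    grad : grad(i, x) returns a stochastic gradient of f_i at x
    """
    h = np.zeros_like(X[i])
    for _ in range(H):                        # H local steps, not yet applied
        h = h + grad(i, X[i] - eta * h)       # q-th step uses X_i - eta * sum
    avg = (quantize(X[i]) + quantize(X[j])) / 2.0
    X[j] = avg                                # one-way write to j's buffer
    X[i] = avg - eta * h                      # i adopts average + local steps
```

Note that both endpoints adopt the quantized average, but only the initiator adds its accumulated local gradients, matching the update rule spelled out in Algorithm 1 below.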
Specifically, the key observation behind our analysis is exactly in showing that the nodes' local models stay well-enough concentrated around their mean throughout optimization to allow for correct decoding of quantized models, which in turn implies joint convergence by the nodes towards a point of vanishing gradient. This concentration follows via a non-trivial super-martingale argument. If nodes take a constant number of local SGD steps between communication steps, then SwarmSGD has Θ(√n) speedup to convergence for non-convex objectives. This matches results from previous work which considered decentralized dynamics but with global synchronization [Lian et al., 2017].

Experimental Validation. We apply SwarmSGD to train deep neural networks on image classification and machine translation (NMT) tasks, deployed on the Piz Daint supercomputer [Piz, 2019]. Experiments confirm the intuition that the average synchronization cost of SwarmSGD per iteration is low: it stays at less than 10% of the batch computation time, and remains constant as we increase the number of nodes. For example, using SwarmSGD, we are able to train a Transformer-XL [Vaswani et al., 2017] model on WMT17 (En-Ge) 1.5× faster than a highly-optimized large-batch SGD baseline, and to slightly higher accuracy, without additional hyper-parameter tuning. At the same time, due to the reduced communication frequency, Swarm also improves upon the speed of the previous practical decentralized methods, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Importantly, we also note that, in less overparametrized settings such as training residual CNNs [He et al., 2016] on ImageNet [Russakovsky et al., 2015], nodes do need to perform more iterations over the dataset relative to the baseline in order to recover full accuracy. This is predicted by the analysis, and confirms similar findings in previous work [Assran et al., 2018]. Overall, our method does appear well-suited to training large modern models at node counts where global synchronization among all nodes is prohibitively expensive.

Related Work. Decentralized optimization has a long history [Tsitsiklis, 1984], and is related to the study of gossip algorithms, e.g. [Kempe et al., 2003, Xiao and Boyd, 2004, Boyd et al., 2006]. Gossip is usually studied in one of two models [Boyd et al., 2006]: synchronous, structured in global rounds, where each node interacts with a randomly chosen neighbor, forming a matching, and asynchronous, where each node wakes up at random times, e.g. given by a Poisson clock, and picks a random neighbor to interact with. Several classic optimization algorithms have been analyzed in the asynchronous gossip model [Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014]. In this paper, we focus on analyzing decentralized SGD in this model. As mentioned, the growing line of work on decentralized optimization for machine learning has mostly focused on variants of the synchronous gossip model. Specifically, Lian et al. [2017] considered this setting in the context of DNN training, while Tang et al. [2018] and Koloskova et al. [2019b] also analyzed decentralized optimization with quantization in the synchronous model. Wang and Joshi [2018] and Koloskova et al. [2020] provided analysis frameworks for synchronous decentralized SGD with local updates, and possibly changing topologies.
Lian et al. [2018] and Assran et al. [2018] focused specifically on reducing synchronization costs in this setting, and proposed algorithms with partially non-blocking communication, in which nodes may read a stale version of the interaction partner's information, modelling e.g. a communication buffer. However, the maximum staleness must be bounded by a global variable τ, which must be enforced throughout the execution. As observed by Assran et al. [2018], enforcing this bound can cause blocking, and therefore the authors of these works propose to implement a relaxed round-based model, in which nodes interact once per round in perfect matchings. Their algorithms provide O(1/√(Tn)) convergence rates, under analytical assumptions. Upon careful examination, we find that their analysis approach can be extended to the asynchronous gossip model we consider, by defining the "contact matrices" to correspond to pairwise interactions. However, this introduces two significant limitations. First, the analysis will not support local gradient updates to models nor quantized communication. If we remove these practical relaxations, our technique yields better bounds, as our potential analysis is specifically tailored to this dynamic interaction model. Second, as we detail in the Appendix, some of their technical conditions imply the existence of global synchronization. For Assran et al. [2018], as we detail in the Appendix, their analysis would not guarantee any non-trivial speedup due to parallelization in the asynchronous gossip model. We describe these issues in detail and present a systematic comparison in Appendix A.

Lu and De Sa [2020] provided a novel approach to analyze decentralized SGD with quantization and limited asynchrony: specifically, their algorithm requires blocking communication, i.e. nodes have to synchronize explicitly during interactions, but may see old versions of each other's models. More precisely, during each interaction, both parties are responsible for updating their local models, meaning that once a node is woken up (we call it the initiator node) and chooses an interaction partner, it has to block until the partner is woken up as well. In our case, the initiator can update both its local model and the local model of its partner and proceed to the next step without blocking. Koloskova et al. [2019a] use a similar update rule in the synchronous model. Zhang and You [2021] recently proposed a decentralized algorithm which is fully asynchronous as long as node activation rates and message delays are bounded. As noted earlier, bounding activation rates does imply blocking; however, tolerating (bounded) message delays does improve over our approach of updating models using atomic writes. The setting further differs in that they assume that nodes compute full (non-stochastic) gradients, as well as that the loss function satisfies the PL condition.

In sum, we are the first to explicitly consider the asynchronous gossip model, and the impact of local updates, asynchrony, and quantization used in conjunction with decentralized SGD. Our technique is new, relies on a fine-grained analysis of individual interactions, and can yield improved bounds even in the case where H = 1. Further, our algorithm is the first to allow for both communication compression and non-blocking communication. From the implementation perspective, the performance of our algorithm matches or improves on that of previous methods, notably D-PSGD [Lian et al., 2017], AD-PSGD [Lian et al., 2018] and SGP [Assran et al., 2018].
2 Preliminaries

The Distributed System Model. We consider a model which consists of n ≥ 2 nodes, each of which is able to perform local computation. We assume that the communication network of the nodes is a graph G with spectral gap λ_2, which denotes the second smallest eigenvalue of the Laplacian of G. Let ρ_max and ρ_min be the maximum and minimum degrees in G, respectively. We will focus on densely-connected topologies, which model supercomputing and cloud networks: for instance, the standard Dragonfly topology [Kim et al., 2008, Besta and Hoefler, 2014] is regular, densely connected and low-diameter, mimicking regular expanders. The execution is modelled as occurring in discrete steps, where in each step a new node (the "initiator") is sampled, and can then contact one of its neighbors (the "responder") uniformly at random. (At the algorithm level, the initiator is "sampled" once it completes its current computational step, and seeks to interact with a neighbor.) We denote the number of steps for which we run by T. Globally, the communication steps can be seen as a sequence of sampled directed communication edges. Thus, the basic unit of time is a single pairwise interaction between two nodes. Notice however that in a real system Θ(n) of these interactions could occur in parallel. Thus, the standard global time measure is parallel time, defined as the total number of interactions divided by n, the number of nodes. Parallel time intuitively corresponds to the average number of interactions per node until convergence. This model is identical to the asynchronous gossip model [Xiao and Boyd, 2004], and to the population protocol model [Angluin et al., 2006].

Stochastic Optimization. We assume that the agents wish to jointly minimize a d-dimensional, differentiable function f : R^d → R. Specifically, we will assume the empirical risk minimization setting, in which agents are given access to a set of m data samples S = {s_1, ..., s_m} coming from some underlying distribution D, and to functions ℓ_i : R^d → R which encode the loss of the argument at the sample s_i. The goal of the agents is to converge on a model x* which minimizes the empirical loss over the m samples, that is

x* = argmin_x f(x) = argmin_x (1/m) \sum_{i=1}^m ℓ_i(x).

We assume that each agent i has a local function f_i associated to its fraction of the data, i.e. for all x ∈ R^d, f(x) = (1/n) \sum_{i=1}^n f_i(x). Agents employ these samples to run a decentralized variant of SGD, described in detail in the next section. For this, we will assume that each agent i has access to unbiased stochastic gradients g̃_i of the function f_i, i.e. functions such that E[g̃_i(x)] = ∇f_i(x). Stochastic gradients can be computed by each agent by sampling i.i.d. from the distribution D, and computing the gradient of f at x with respect to that sample. Our analysis also extends to the case where each agent samples from its own partition of the data. We assume the following conditions about the objective function, although not all our results require the second moment bound:

1. Smooth Gradients: The gradient ∇f_i(x) is L-Lipschitz continuous for some L > 0, i.e. for all x, y ∈ R^d and every agent i:
‖∇f_i(x) − ∇f_i(y)‖ ≤ L‖x − y‖. (1)

2. Bounded Variance: The variance of the stochastic gradients is bounded by some σ² > 0, i.e. for all x ∈ R^d and every agent i:
E‖g̃_i(x) − ∇f_i(x)‖² ≤ σ². (2)

3. Bounded Local Function Variance: There exists ς² > 0 such that for all x ∈ R^d:
(1/n) \sum_{i=1}^n ‖∇f(x) − ∇f_i(x)‖² ≤ ς². (3)
4. Bounded Second Moment: The second moment of the stochastic gradients is bounded by some M² > 0, i.e. for all x ∈ R^d and every agent i:
E‖g̃_i(x)‖² ≤ M². (4)

Note that throughout this paper, for any random variable X, by E‖X‖² we mean E[‖X‖²]. Each node has a communication buffer, which, for simplicity, we assume can be read and written atomically by each node; importantly, buffers can only hold quantized vectors.

Quantization Procedure. We use a quantization function which follows from Lemma 23 in (the full version of) Davies et al. [2021].

Corollary 2.1. (Quantization for Communication Buffers) Fix parameters R and ε > 0. There exists a quantization procedure defined by an encoding function Enc_{R,ε} : R^d → {0,1}* and a decoding function Dec_{R,ε} : R^d × {0,1}* → R^d such that, for any vector x ∈ R^d which we are trying to quantize, and any vector y which is used by decoding, which we call the decoding key, if ‖x − y‖ ≤ εR^{R^d}, then with probability at least 1 − log log(‖x − y‖/ε) · O(R^{−d}), the function Q_{R,ε}(x) = Dec_{R,ε}(y, Enc_{R,ε}(x)) has the following properties:
1. (Unbiased decoding) E[Q_{R,ε}(x)] = E[Dec_{R,ε}(y, Enc_{R,ε}(x))] = x;
2. (Error bound) ‖Q_{R,ε}(x) − x‖ ≤ (R² + 7)ε;
3. (Communication bound) To compute Dec_{R,ε}(y, Enc_{R,ε}(x)), only the first B bits of Enc_{R,ε}(x) are needed, where B = O(d log(R‖x − y‖/ε)).

Proof. Lemma 23 of the full version of Davies et al. [2021] provides similar guarantees to the ones we want to prove, but they assume interactive message-passing communication between an encoding node u and a decoding node v. However, in their setting, the messages sent by u are non-adaptive: u simply sends quantizations using an increasing number of bits, until v replies confirming that it has decoded successfully. The number of bits sent during communication is upper bounded by O(d log(R‖x − y‖/ε)), where x is the vector node u is sending and y is the vector node v is using for decoding. In our setting, we use communication buffers, so node u can simply append all of its potential messages together as the encoding Enc_{R,ε}(x). Critically, notice that node u should append enough bits so that decoding is possible (since in our setting there is no way for v to acknowledge that it has received a sufficient number of bits). This can be done in two ways. If u knows the distance between x and y, then u can simply write O(d log(R‖x − y‖/ε)) bits in the register. In the second case, u does not know the distance. Let T be the total number of times nodes communicate throughout our algorithm. We will show that with high probability all distances between encoded vectors and decoding keys will be at most T^{17}ε/R (the dependence on T stems from the fact that we wish to show an upper bound with high probability; please see Lemma B.19 in the Appendix), and therefore at most O(d log T) bits for quantization will suffice in the worst case. Thus, the node writes O(d log T) bits in the register, but when v tries to decode, it does not need all those bits: it reads and uses only the first O(d log(R‖x − y‖/ε)) bits.

Counting Communication Cost. We emphasize that, when we calculate the number of bits needed by quantization, we actually aim to measure the number of bits exchanged between u and v. In the setting we consider, which has local registers/communication buffers, this is the number of bits spent to read from (or to write to) the non-local register.
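Corollary 2.1 only fixes the Enc/Dec interface. As a rough, hedged illustration of how a decoding key close to x lets the decoder get away with few bits, here is a Python sketch of a per-coordinate modular grid quantizer; it is not the lattice construction of Davies et al. [2021] (it has a fixed modulus M, eps-level error, and no (R² + 7)ε guarantee), and enc, truncate, dec, eps, and M are illustrative names and parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def enc(x, eps=1e-3):
    """Unbiased stochastic rounding of x onto an eps-grid; returns the
    (signed) grid indices. A real encoder would emit these as a bit-string."""
    q = np.floor(x / eps)
    q += rng.random(x.shape) < (x / eps - q)   # round up w.p. fractional part
    return q.astype(np.int64)

def truncate(code, M=2**16):
    """Keep only the grid index modulo M, the analogue of reading just the
    first B bits of the encoding."""
    return code % M

def dec(short_code, y, eps=1e-3, M=2**16):
    """Recover the grid point congruent to short_code (mod M) that is closest
    to the key y; exact whenever |x - y| < M*eps/2 coordinate-wise."""
    base = np.round(y / eps).astype(np.int64)
    k = short_code + M * np.round((base - short_code) / M).astype(np.int64)
    return k * eps
```

The modulus plays the role of the "first B bits": the closer the key y is to x, the smaller the modulus (and hence the fewer bits) that still allows exact recovery of the grid index.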
Since the second case above involves writing a relatively large number of bits, we will use it only when u is writing a quantized value to its own register/buffer, and so does not need to communicate the bits. Then, only the O(d log(R‖x − y‖/ε)) bits read by v need to be communicated. To summarize, in our algorithm we will always ensure that whenever some node u writes a quantized value, it either knows the key which will be used for decoding it, or is writing to its local register.

3 The SwarmSGD Algorithm

We now describe a decentralized variant of SGD, designed to be executed by a population of n nodes, interacting over the edges of a communication graph G, as described above. The algorithm proceeds in individual communication steps, where in each step a node which has completed its local computation seeks a random neighbor to communicate with. We will alternatively say that a node gets activated (once it finishes its computation) and then becomes the initiator of the interaction.

The Communication Registers. The communication buffer of each node i consists of two registers: one containing an encoded version of its own (possibly outdated) model, which will only be written to by node i itself, and one holding an encoded version of its current model, which will only be written to by other nodes. (This second register can be seen as a "communication queue" for the nodes wishing to communicate with i.) Initially, all registers contain zero vectors.

Parallel Execution. For simplicity, we will skip the details of quantization, and assume that nodes write and read quantized models directly, without encoding and decoding steps. Both the current and the outdated models are zero vectors initially. Each node i computes a number of local gradients based on its last model view, and other nodes may update this model while i is computing. Hence, only after node i is done with computing local gradients does it read its updated current model. Let X̂_i be the value of the outdated model and let X_i be the value of the current model. Node i computes the average of quantized models (Q(X_i) + Q(X_j))/2 and writes it in the register which contains the current model of node j. Next, it computes (Q(X_i) + Q(X_j))/2 − η h̃_i(X̂_i) (where η is the learning rate and h̃_i(X̂_i) is a sum of local gradients), and writes it in both of its local registers, the one containing the current model and the one containing the outdated model. Once the write is finished, it again proceeds to compute local gradients, based on the view (Q(X_i) + Q(X_j))/2 − η h̃_i(X̂_i).

Sequential Model. For the analysis, it is useful to map these parallel interactions to a sorted sequence of sequential ones. Thus, time steps track the interactions between agents, and each interaction consists of a random number of local steps which the activated node performs, plus one averaging step in which the activated node (the initiator) contacts a random neighbour. The analysis assumes that nodes get activated randomly, by independent Poisson clocks, which leads to a uniform global sampling distribution. In practice, this can be approximated by having the number of local gradient steps executed by each node be a geometric random variable with mean H. For the sake of practicality, our experiments will take H to be a small constant, instead of a random variable, which yields similar results. The pseudocode from the point of view of a single node i which was activated at step t + 1 is given in Algorithm 1.
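As a hedged sketch of this sequential view (reusing the swarm_step sketch above; run_sequential and neighbors are illustrative names, not the paper's code), each step samples an initiator uniformly, a uniform neighbor, and a geometric number of local steps with mean H:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_sequential(X, neighbors, grad, T, eta=0.05, H=4):
    """Sequential simulation of the asynchronous gossip schedule.
    X         : (n, d) array of node models
    neighbors : neighbors[i] lists the neighbors of node i in G
    grad      : grad(i, x), a stochastic gradient oracle for f_i
    """
    n = X.shape[0]
    for _ in range(T):
        i = int(rng.integers(n))             # activated node = initiator
        j = int(rng.choice(neighbors[i]))    # uniform neighbor in G
        Hi = int(rng.geometric(1.0 / H))     # local-step count, mean H
        swarm_step(X, i, j, grad, eta=eta, H=Hi)
    return X.mean(axis=0)                    # the mean model mu_T
```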
For t ≥ 0, let Enc(X̂^i_t) and Enc(X^i_t) be the values written in the registers containing the outdated and the current model of agent i after t steps, respectively. That is, X^i_t is the current model of agent i and X̂^i_t is the outdated model.

The Communication Procedure. Since i was activated at step t, we will assume that it has already computed H_i local gradients using the outdated model X̂^i_t, where H_i is a geometric random variable with mean H, as follows. Let h̃^0_i(X̂^i_t) = 0^d; for indices 1 ≤ q ≤ H_i, let h̃^q_i(X̂^i_t) = g̃_i(X̂^i_t − \sum_{s=0}^{q−1} η h̃^s_i(X̂^i_t)) be the q-th local gradient. Then, let h̃_i(X̂^i_t) = \sum_{q=1}^{H_i} h̃^q_i(X̂^i_t) be the sum of all computed local gradients. Alternatively, since we are in a sequential setting, we can assume that i does this computation at step t.

First, i retrieves Q(X^i_t) (the quantized version of its current model) by decoding Enc(X^i_t) using key Q(X̂^i_t). We would like to note that i can obtain Q(X̂^i_t) simply by decoding Enc(X̂^i_t), using key X̂^i_t (which it knows, to full precision, since it calculated the value itself), and this step does not cost any communication bits since all of the terms involved are local to i's registers. Then, it contacts its interaction partner j. Node i calculates Q(X̂^j_t) by decoding Enc(X̂^j_t), again using X̂^i_t as a key, and then it retrieves Q(X^j_t) by decoding Enc(X^j_t) with key Q(X̂^j_t). Then, i calculates X^i_{t+1} = Q(X^i_t)/2 + Q(X^j_t)/2 − η h̃_i(X̂^i_t) and X^j_{t+1} = Q(X^j_t)/2 + Q(X^i_t)/2. Next, node i calculates Enc(X^i_{t+1}) and writes it to its own register for outdated models. Here, we use the first case for quantization using Corollary 2.1: i is not aware of the key that other nodes will use for decoding, but since it is writing to its own local register, it can afford to use the worst-case O(d log T) bits. Additionally, it writes Enc(X^i_{t+1}) to its own register containing the current model, so that there are enough bits for Q(X̂^i_{t+1}). (Note that X̂^i_{t+1} = X^i_{t+1} has to be used as the decoding key.) Finally, it calculates Enc(X^j_{t+1}) and writes it in the register which contains the current model of j, using enough bits that it can be decoded using Q(X̂^j_{t+1}) (we have that X̂^j_{t+1} = X̂^j_t). Notice that, the way our algorithm is specified, every node which tries to decode Enc(X^j_{t+1}) will use Q(X̂^j_{t+1}) as a key (which i knows), hence Corollary 2.1 holds in this case as well. We emphasize the fact that all this communication is one-way, as it does not require j's intervention.

By Corollary 2.1, the total number of bits used is

O(d log(R‖X̂^i_t − X̂^j_t‖/ε)) + O(d log(R‖Q(X̂^j_t) − X^j_t‖/ε)) + O(d log(R‖Q(X̂^j_t) − X^j_{t+1}‖/ε)).

(Recall that we count only reading and writing to other registers, and do not count operations i performs on its own registers.) We will show that we can make the probability of any instance of quantization failing less than 1/T^c, for some sufficiently large constant c, by setting the constant factor in the number of bits sufficiently high. Then, we can take a union bound over all instances of quantization throughout the algorithm, to show that none fail with high probability in T. Henceforth, we will be able to prove the convergence of our algorithm conditioned on this event.

Algorithm 1 Sequential SwarmSGD pseudocode for each interaction between nodes i and j.
1: % Let G be a communication graph.
2: % Initial models X^1_0 = X^2_0 = ... = X^n_0
3: for t = 0 to T − 1 do
4:   Sample the initiator node i uniformly at random.
5:   Node i samples a node j, adjacent to it in G, uniformly at random.
6:   Let t − τ^i_t be the last step at which node i was chosen as initiator.
7:   Let X̂^i_t = X^i_{t−τ^i_t} be its model from that step.
8:   Q(X^i_t) ← Dec(Q(X̂^i_t), Enc(X^i_t))
9:   Q(X̂^j_t) ← Dec(X̂^i_t, Enc(X̂^j_t))
10:  Q(X^j_t) ← Dec(Q(X̂^j_t), Enc(X^j_t))
11:  X^i_{t+1} ← Q(X^i_t)/2 + Q(X^j_t)/2 − η h̃_i(X̂^i_t)
12:  X^j_{t+1} ← Q(X^i_t)/2 + Q(X^j_t)/2
13:  Write Enc(X^i_{t+1}) to the registers containing the current and outdated models of node i
14:  Write Enc(X^j_{t+1}) to the register containing the current model of node j
15:  For k ≠ i, j, X^k_{t+1} = X^k_t.
16: end for

Avoiding Race Conditions. An interesting question is what happens when multiple nodes contact j concurrently. For conciseness, our pseudocode assumes that the update sequence in lines 8–14 happens atomically, but this sequence can cause a data race. To mitigate this, we can use a bounded non-blocking queue [Michael and Scott, 1996] at each node instead of a single buffer. Thus, instead of updating the buffer value atomically, each node simply appends the corresponding quantized model mean to j's communication queue. In practice, this queue is extremely unlikely to be contended, since communication collisions are rare.

4 The Convergence of SwarmSGD

Let µ_t = (1/n) \sum_{i=1}^n X^i_t be the mean over node models at time t. Our main result is the following:

Theorem 4.1. Assume the total number of steps T ≥ 10n, learning rate η = n/√T, and quantization parameters R = 2 + T^{3/d} and ε = ηHM/(R² + 7). Then, with probability at least 1 − O(1/T), Algorithm 1 converges at rate

(1/T) \sum_{t=0}^{T−1} E‖∇f(µ_t)‖² ≤ 2(f(µ_0) − f(x*))/(H√T) + 6(σ² + 6Hς²)/√T + 12HM²/√T + C n² ρ_max³ H² L² M² / (T ρ_min λ_2²),

for a constant C, and uses O(d log(ρ_max²/(ρ_min λ_2)) + log T) expected communication bits per step.

Discussion. First, this notion of convergence is standard in the non-convex case [Lian et al., 2015, 2017, 2018], and each of the upper bound terms has an intuitive interpretation: the first represents the reduction in loss relative to the initialization, and gets divided by the number of local steps H, since progress is made in this term in every local step; the second represents the noise due to stochasticity, and is naturally linear in H, as H steps are taken in expectation between two interactions. (Recall that in our model T is the number of interactions, and TH is the expected number of gradient steps.) The fourth term encodes overheads caused by local steps, quantization, and graph structure; however, it is usually seen as negligible (cf. [Lu and De Sa, 2020]), due to the division by T. The third term is the critical one, as it implies a dependence on the second-moment bound. Intuitively, this term appears because our algorithm combines both non-blocking communication and quantization: first, unlike prior work, we do not assume an explicit delay upper bound τ on communication; in conjunction with quantization, the unbounded delay this implies means that our estimate of the model average µ_t may become dependent on M for large delays, which causes this dependency. While this limitation appears inherent, we are able to remove it if we eliminate quantization: in this case, we get a negligible dependency on M. We formalize this in Corollary 4.2. Second, if we focus on the total number of steps to reach some error bound, we notice an interesting trade-off between the linear reduction in H in the first term, due to local steps, and the linear increase in H in the other terms.
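To see how the terms trade off against H and T, a small helper evaluating the bound as reconstructed above can be useful; the unnamed constant C must be supplied by the caller, and all argument names are illustrative:

```python
import numpy as np

def swarm_rate_bound(T, n, H, L, M, sigma, zeta, f_gap,
                     rho_max, rho_min, lam2, C=1.0):
    """Evaluate the four terms of the Theorem 4.1 rate bound.
    f_gap = f(mu_0) - f(x*); C stands for the unnamed constant."""
    t1 = 2.0 * f_gap / (H * np.sqrt(T))                    # initial loss gap
    t2 = 6.0 * (sigma**2 + 6 * H * zeta**2) / np.sqrt(T)   # gradient noise
    t3 = 12.0 * H * M**2 / np.sqrt(T)                      # staleness + quantization
    t4 = C * n**2 * rho_max**3 * H**2 * L**2 * M**2 / (T * rho_min * lam2**2)
    return t1 + t2 + t3 + t4
```

For fixed T, increasing H shrinks the first term but inflates the second and third, which is exactly the trade-off just noted.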
Notice that, for dense and low-diameter graphs, such as the regular expanders popular in cluster networks, our convergence bound has no dependence on the graph parameters, and communication is linear in d. However, one limitation is that we could have a log n dependency in the communication for highly-irregular and poorly-connected graphs. Finally, note that time T here counts total interactions. However, Θ(n) pairwise interactions occur independently in parallel, and so we can slightly abuse notation and replace T by nT in the above formula, to obtain an optimal Θ(√n) speedup in terms of wall-clock time. Yet, this speedup is dampened by the variance due to noisy local gradient steps, a fact which we will revisit in the experimental section.

Proof Overview. At a high level, the argument rests on two technical ideas. The first is that, in spite of noise and local steps, the nodes' parameters remain concentrated around the mean µ_t. The second is to leverage this, and bound the impact of stochastic noise and model staleness on convergence. In particular, the main technical difficulty in the proof is to correctly "encode" the fact that the parameters are well concentrated around the mean. A natural approach is to bound the model variance Γ_t after t interactions. Formally, we define Γ_t = \sum_{i=1}^n ‖X^i_t − µ_t‖², where µ_t = (1/n) \sum_{i=1}^n X^i_t, as before. We bound the expected evolution of Γ_t over time, depending on the learning rate, the number of local steps, the quantization parameter, and the bound provided by the assumption on the stochastic gradients (the bound M²). The critical point is that the upper bound on the expectation of Γ_t does not depend on the number of interactions t. More precisely, if all the above hyper-parameters are constant, we get that E[Γ_t] = O(n). Our approach carries tools from classic load balancing [Berenbrink et al., 2009] over to the multi-dimensional case. Three key elements of novelty in our case are that (1) for us the load-balancing process is dynamic, in the sense that new loads, i.e. gradients, get continually added; (2) the load-balancing process we consider is multi-dimensional, whereas the literature usually considers simple scalar weights; (3) the models can be outdated and quantized, which leads to a complex, noisy load-balancing process. We resolve the third and most challenging issue by using carefully-defined auxiliary potentials.

Removing the Second-Moment Bound. Upon reflection, we notice that we can render the dependency on M² negligible if we do not use quantization, but otherwise keep the algorithm the same:

Corollary 4.2. Given the previous assumptions and learning rate η = n/√T, for some constant C, we have that Algorithm 1, where the quantization is the identity, converges at rate

(1/T) \sum_{t=0}^{T−1} E‖∇f(µ_t)‖² ≤ 2(f(µ_0) − f(x*))/(H√T) + 6(σ² + 6Hς²)/√T + C n² ρ_max³ H² L² M² / (T ρ_min λ_2²).

Notice that in this case the term containing the second-moment bound M² is dampened by a factor of 1/T, hence we can assume that Algorithm 1 converges at the close-to-optimal rate O(2(f(µ_0) − f(x*))/(H√T) + 6H(σ² + 6ς²)/√T). This result still improves upon previous analyses [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020] in the sense that communication is completely non-blocking (there is no τ), and we allow for local steps. Further, in the absence of quantization, and assuming that the nodes perform a single local gradient step, we can entirely remove assumption (4) when T is large enough (e.g. for the fully connected graph we will need T ≥ Ω(n³)). More precisely, we can attain a convergence rate of O((f(µ_0) − f(x*))/√T + (σ² + ς²)/√T + n³ ρ_max⁴ L² (σ² + ς²) / (T ρ_min λ_2³)). We leave the proof of this last extension to the full version of this work.
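The potential Γ_t at the heart of the proof is straightforward to monitor empirically; the helper below (illustrative, not the paper's code) computes it for the models produced by the simulation sketches above, so the E[Γ_t] = O(n) concentration claim can be sanity-checked numerically.

```python
import numpy as np

def gamma_potential(X):
    """Gamma_t = sum_i ||X_i - mu_t||^2, the model-variance potential used
    to show that node models stay concentrated around the mean."""
    mu = X.mean(axis=0)            # mu_t, the average of all node models
    return float(np.sum((X - mu) ** 2))
```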
5 Experimental Results

In this section, we validate our analysis by applying the algorithm to training deep neural networks for image classification and machine translation. We map the algorithm onto a multi-node supercomputing setting, in which we have a large number of compute nodes connected by fast communication links. The key overhead in this setting is synchronization: at large node counts, the cost of synchronizing all nodes so they execute in lock-step can be very high; see e.g. [Li et al., 2019] for numerical results on different workloads. Transmission cost also becomes significant at large node counts and large model sizes. Decentralized methods can mitigate this overhead, since nodes synchronize only sporadically and in pairs.

Target System and Implementation. We run SwarmSGD on the CSCS Piz Daint supercomputer, which is composed of Cray XC50 nodes, each with a Xeon E5-2690v3 CPU and an NVIDIA Tesla P100 GPU, using a state-of-the-art Aries interconnect over a Dragonfly network topology, which is regular. Please see Piz [2019] for more details. We implemented SwarmSGD in PyTorch and TensorFlow using MPI-based primitives, with non-blocking averaging. The PyTorch implementation is built on top of the SGP framework [Assran et al., 2018], and uses SwarmSGD to train ResNets on the CIFAR-10/100 [Krizhevsky et al., 2014] and ImageNet [Russakovsky et al., 2015] datasets, while we use the TensorFlow implementation to train the original version of the Transformer-XL model [Vaswani et al., 2017] on the WMT17 (En-Ge) dataset. All algorithms use the same topology overlay, which is fully-connected: according to their theory and experiments, this well-connected overlay should maximize convergence speed. SGP was run with overlap factor 1, following Assran et al. [2018].

Training Process. Our training methodology follows data-parallel training, with some differences due to decentralization, and is identical to previous work on decentralized and local SGD, e.g. Lian et al. [2017], Assran et al. [2018], Lin et al. [2018]. Training proceeds in epochs, each of which corresponds to the processes collectively performing a full pass over the dataset. At the beginning of each epoch, we re-shuffle the dataset and partition it among processes [Lin et al., 2018]. As noted in previous work [Lian et al., 2017, 2018, Assran et al., 2018], variants of decentralized SGD are not always able to recover sequential SGD accuracy within the same number of epochs as this baseline. This is justified theoretically, as the slower mixing can affect convergence, but also intuitively, as each model sees significantly fewer updates per epoch. Thus, we will allow the decentralized schemes to execute for more epochs, by a constant multiplier factor between 1 and 3. To reduce these multipliers, we experimented with SlowMo [Wang et al., 2019]; we found that it improved results across methods on CIFAR-10, but not at ImageNet scale; therefore, the provided results do not include it. Once we have fixed the number of epochs, we do not alter the other training hyperparameters: in particular, the learning rate schedule, momentum and weight decay terms are identical to the standard values for sequential SGD, for each individual model.
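As shown below, the per-epoch reshuffle-and-partition step described under Training Process can be sketched in a few lines; the seeded permutation and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def epoch_partition(num_samples, num_procs, epoch, base_seed=0):
    """Re-shuffle the sample indices at the start of each epoch and split
    them evenly among processes, as in data-parallel decentralized training."""
    rng = np.random.default_rng(base_seed + epoch)   # same order on all nodes
    perm = rng.permutation(num_samples)
    return np.array_split(perm, num_procs)           # shards[p] -> process p

# Example: the 8 per-process index shards for epoch 3
shards = epoch_partition(num_samples=50_000, num_procs=8, epoch=3)
```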
Accuracy and Speed. We first examined whether SwarmSGD can in fact recover full accuracy versus the sequential or large-batch SGD baselines. In Table 1 we provide an overview of the parameter values needed to recover large-batch SGD accuracy (following Goyal et al. [2017]) using SwarmSGD, on the ResNet, ImageNet and CIFAR tasks. We execute on 32 nodes for ImageNet, and 8 nodes for CIFAR-10. (Local batch sizes are 128 for ResNet20 and ResNet50, and 128 for ResNet18. Quantization is not applied in these experiments.) The results show that Swarm can recover or slightly exceed the accuracy of the large-batch baselines, and that it has lower practical communication cost relative to existing methods (see Figure 2(b), where we separate the average computation cost per batch). However, Swarm requires significant additional passes over the data (up to 2.7×) to achieve full accuracy, which negates its performance benefits in this specific setting, relative to large-batch SGD. (Please see the Supplementary for end-to-end time comparisons.) This partly negative finding is in line with previous work on decentralized methods [Assran et al., 2018].

Next, we examine accuracy for the WMT17 task. The results are provided in Figure 1(a), in accuracy-vs-time format, for 16 and 32 nodes, executing for 10 global epochs. Here, the large-batch SGD (LB-SGD) baseline (BLEU score 26.1 at 16 nodes) is a poor alternative at high node counts due to model size: its throughput is low, and drops catastrophically at 64 nodes due to the network becoming severely bandwidth-bottlenecked (see Figure 1(b)). At 16 nodes, Swarm slightly exceeds the baseline accuracy at 26.17 BLEU, for an end-to-end speedup of ∼1.5×. In the same setting, Swarm outperforms all other decentralized methods (the fastest previous method, AD-PSGD, is 30% slower, and less accurate), both in terms of BLEU score and in terms of end-to-end time. (The objective loss graph is similar, and is provided in the Appendix.) At 32 nodes, all decentralized methods reach lower scores (∼23.5) after 10 epochs. However, we observed experimentally that running Swarm for an additional 5 epochs (multiplier 1.5) at 32 nodes recovered a BLEU score of ∼25.72, which is 30% faster than the 16-node version in terms of end-to-end time (omitted for visibility).

In addition, we investigated 1) the accuracy of the real average of all models throughout training: it is usually more accurate than an arbitrary model, but not significantly so, corroborating the claim that individual models tend to stay close to the mean; 2) the influence of the number of local steps on accuracy: perhaps surprisingly, we were able to recover baseline accuracy on ResNet18/ImageNet for up to 4 local steps (see Figure 2(a)); 3) the impact of quantization on convergence, where we were able to recover accuracy when applying 8-bit model quantization to Swarm. We encourage the reader to examine the full experimental report in the Appendix, which contains data on these experiments, as well as additional ablation studies.

Discussion. Generally, the performance of SwarmSGD appears to be slightly superior to that of previous decentralized methods (see Figure 1 for an illustration, and Figure 2(b) for a performance breakdown). We investigated this advantage, and found that the per-step communication cost of Swarm, without quantization, is similar to AD-PSGD; however, our algorithm benefits from the reduction in communication frequency: nodes communicate at least 2× less often, and therefore incur lower average communication cost.
In particular, a closer examination of the average batch times in Figure 2(b) shows that time per node per batch (including communication and computation) is largely constant as we increase the number of nodes, which suggests good scaling behaviour. The main disadvantage of Swarm is that, similar to previous decentralized methods, it may need additional data passes in order to fully recover accuracy at high node counts. However, we also note that our method did not benefit from the high level of hyperparameter tuning applied to large-batch SGD, e.g. Goyal et al. [2017]. We find it interesting that this accuracy issue is less prevalent in the context of large, over-parameterized models, such as the Transformer, where Swarm can be a viable alternative to large-batch SGD within the same number of epochs.

6 Conclusions and Future Work

We analyzed the convergence of SGD in an extremely decoupled model of distributed computing, in which nodes mostly perform independent SGD updates, interspersed with intermittent pairwise averaging steps, which may be performed in an inconsistent and noisy manner. We showed that SGD still converges in this restrictive setting, even under consistency relaxations. Empirical results complement our analysis, showing that this method can outperform previous decentralized algorithms, and can even be competitive against large-batch SGD for very large models. A natural extension would be to generalize the bounds to arbitrary communication graphs. From the practical perspective, one extension would be to reduce the additional training epochs, and to experiment on large-scale decentralized testbeds.

Acknowledgments and Disclosure of Funding

We gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). PD partly conducted this work while at IST Austria and was supported by the European Union's Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 754411. SL was funded in part by the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant agreement DAPP, No. 678880, and EPiGRAM-HS, No. 801039).
1. What is the focus and contribution of the paper on decentralized optimization for distributed deep learning? 2. How does the reviewer assess the originality and significance of the proposed method? 3. What are the concerns regarding the quality of the empirical and theoretical contributions? 4. Do you have any questions about the assumptions made in the analysis, such as the bounded second moment assumption and the Poisson point process assumption? 5. How does the reviewer evaluate the performance of the proposed method in practice, particularly in comparison with other gossip-based methods?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a decentralized optimization method, SwarmSGD, for reducing the communication overhead in distributed deep learning. The proposed method combines asynchronous pair-wise gossip, quantized communication, and local updates into a single method and a unified analysis. Convergence guarantees are provided for smooth non-convex functions, using the quantizer from Davies et al., 2021, and a bounded second moment assumption. Numerical results are provided on image classification (CIFAR, ImageNet) and machine translation (WMT En-Ge) tasks. Review Originality and Significance This paper mostly combines many related ideas from the literature (asynchronous pair-wise gossip, quantized communication, local updates) into a single method, and so originality is limited. However, there is significance in the work, since it is necessary for all these orthogonal communication-reduction techniques to eventually be integrated together. In this sense, this paper fits nicely in the literature. Quality I only have a couple of concerns about the quality of the empirical and theoretical contribution, and would appreciate it if the authors could address these in their rebuttal. The assumptions made consist of: a static communication graph; L-smoothness; bounded variance; a bounded second moment; and a Poisson point process for the asynchronous activation of nodes. To the best of my knowledge, the analysis applies standard tools from the literature, e.g., the supermartingale-style iteration used heavily in the analysis of subgradient-push methods, as well as Perron-Frobenius arguments for consensus. However, in my opinion the bounded second moment assumption is a non-trivial limitation of the theoretical contribution. It may be acceptable in a quantized setting, but this assumption is still used in Corollary 4.2 after removing the quantization effects. The Poisson point process assumption is fine for the theory, but it is then approximated in practice by setting the number of local gradient steps for each node to be a geometric random variable, which I would expect to degrade performance in practice, since more frequent asynchronous non-blocking communication is likely to result in better convergence. The baseline numbers for SwarmSGD seem good and consistent with the literature. Specifically, the proposed method looks to recover standard numbers for the chosen benchmarks. However, the timing comparison with other gossip-based methods looks highly misleading. For example, it seems odd to use a fully-connected topology for the comparison of general gossip-based methods like SGP and D-PSGD with pair-wise methods like SwarmSGD, especially since the whole motivation for gossip-based deep learning methods is to reduce communication overhead by leveraging sparse topologies.
NIPS
Title Asynchronous Decentralized SGD with Quantized and Local Updates Abstract Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes global communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, asynchronous gossip model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called SwarmSGD still converges in this setting, even if non-blocking communication, quantization, and local steps are all applied in conjunction, and even if the node data distributions and underlying graph topology are both heterogenous. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks. 1 Introduction Decentralized optimization has recently emerged as a promising approach for scaling the distributed training of machine learning models, in particular via stochastic gradient descent (SGD) [Lian et al., 2017, Tang et al., 2018, Koloskova et al., 2019a]. Its key advantage is that it removes the need for a central coordinator node in distributed training, and therefore it can allow for high scaling. The general decentralized optimization setting is the following: we are given n nodes, each with a subset of data from some distribution, which can communicate over some underlying graph topology. In each global round, each node samples some local data, performs a local gradient step, and it is paired with a neighbor, which may be chosen randomly. The nodes exchange model information pairwise, and then update their models, often via direct model averaging. Variants of this setting have been analyzed since pioneering work by Tsitsiklis [1984], for various estimation and optimization algorithms [Xiao and Boyd, 2004, Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014] and have seen renewed interest given its applicability to training deep neural networks (DNNs) at scale, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Recently, there has been significant focus on reducing the synchronization overheads for decentralized training, usually employing three approaches: 1) implementing faster non-blocking communication between communication partners at a round [Lian et al., 2018, Assran et al., 2018], which may cause them to see stale versions of their models, 2) allowing nodes to take local steps in between 35th Conference on Neural Information Processing Systems (NeurIPS 2021). their communication rounds [Wang and Joshi, 2018, Koloskova et al., 2020], and 3) applying quantization to the communication [Lu and De Sa, 2020, Tang et al., 2018, Koloskova et al., 2019a,b]. 
The above impressive line of work contributes a rich set of algorithmic and analytic ideas; however, one common limitation is that the algorithms are usually set in the synchronous gossip model, which requires all nodes to perform their communication in lock-step rounds, and share a common notion of time, thus reducing their practicality. To mitigate this fact, some references, e.g. [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020] partially relax this requirement, although they do so at the cost of additional assumptions, or reduced guarantees, as we discuss in related work. Another relative limitation is that the analyses are usually customized to the bespoke communication-reduced methods being applied, and therefore are hard to generalize to other methods. Our Contribution. In this paper, we consider decentralized SGD-based optimization in the simpler, but harder to analyze, asynchronous gossip model [Xiao and Boyd, 2004], in which communication occurs in discrete, randomly chosen pairings among nodes, and does not require a common notion of time. We prove that a new variant of SGD we call SwarmSGD converges in this setting, even though it supports all three communication-reduction approaches mentioned above in conjunction. Our analysis generalizes to heterogeneous data distributions and communication topologies. At a high level, SwarmSGD works as follows. Each node i maintains a local model estimate Xi based on which gradients are generated, and a shared buffer where quantized models are stored for communication with other nodes. In each step, node i first computes a sequence of H local gradient steps, which it does not yet apply. Next, the node chooses communication partner j, uniformly at random among its neighbors. Then, node i reads from its own communication buffer and from the communication buffer of j, obtaining quantized models Qi and Qj . A subtlety here is that Qi is not necessarily the quantized version of the model Xi, since other nodes can write concurrently to i’s buffer. The node i then averages Qi with Qj , and updates the neighbor’s remote buffer to the quantized average. Finally, it applies its local gradient steps to the resulting average, adopts this as its next model Xi, and a writes quantized version of it in its own shared buffer. This procedure can be implemented in a deadlock-free, non-blocking manner, by using either shared-memory or the remote direct-memory access (RDMA) calls supported by MPI [Woodall et al., 2006]. Importantly, the communication partner j does not need to block its computation during communication, and may be contacted by more than one interaction partner during a single local step, although we do assume that individual reads and writes are performed atomically. A key component of this procedure is the quantization scheme: directly using an unbiased quantizer, e.g. [Alistarh et al., 2017] would destroy convergence guarantees, as the quantization error would be proportional to the model norm, which may not be bounded. Instead, we use a customized variant of the quantization scheme of Davies et al. [2021], whose error depends on the distance between the point being quantized (the model), and an arbitrary reference point, provided as a parameter. We prove that each node can reliably use its own model as a reference point to quantize and de-quantize messages placed in its buffer by other nodes. In turn, this requires care in the analysis. 
Specifically, the key observation behind our analysis is exactly in showing that the nodes’ local models stay well-enough concentrated around their mean throughout optimization to allow for correct decoding of quantized models, which in turn implies joint convergence by the nodes towards a point of vanishing gradient. This concentration follows via a non-trivial super-martingale argument. If nodes take a constant number of local SGD steps between communication steps, then SwarmSGD has Θ( √ n) speedup to convergence for non2-convex objectives. This matches results from previous work which considered decentralized dynamics but with global synchronization [Lian et al., 2017]. Experimental Validation. We apply SwarmSGD to train deep neural networks on image classification and machine translation (NMT) tasks, deployed on the Piz Daint supercomputer [Piz, 2019]. Experiments confirm the intuition that the average synchronization cost of SwarmSGD per iteration is low: it stays at less than 10% of the batch computation time, and remains constant as we increase the number of nodes. For example, using SwarmSGD, we are able to train a TransformerXL [Vaswani et al., 2017] model on WMT17 (En-Ge) 1.5× faster than a highly-optimized largebatch SGD baseline, and to slightly higher accuracy, without additional hyper-parameter tuning. At the same time, due to the reduced communication frequency, Swarm also improves upon the speed of the previous practical decentralized methods, e.g. [Lian et al., 2017, 2018, Assran et al., 2018]. Importantly, we also note that, in less overparametrized settings such as training residual CNNs [He et al., 2016] on ImageNet [Russakovsky et al., 2015], nodes do need to perform more iterations over the dataset relative to the baseline in order to recover full accuracy. This is predicted by the analysis, and confirms similar findings in previous work [Assran et al., 2018]. Overall, our method does appear well-suited to training large modern models at node counts where global synchronization among all nodes is prohibitively expensive. Related Work. Decentralized optimization has a long history [Tsitsiklis, 1984], and is related to the study of gossip algorithms, e.g. [Kempe et al., 2003, Xiao and Boyd, 2004, Boyd et al., 2006]. Gossip is usually studied in one of two models [Boyd et al., 2006]: synchronous, structured in global rounds, where each node interacts with a randomly chosen neighbor, forming a matching, and asynchronous, where each node wakes up at random times, e.g. given by a Poisson clock, and picks a random neighbor to interact with. Several classic optimization algorithms have been analyzed in the asynchronous gossip model [Nedic and Ozdaglar, 2009, Johansson et al., 2009, Shamir and Srebro, 2014]. In this paper, we focus on analyzing decentralized SGD in this model. As mentioned, the growing line of work on decentralized optimization for machine learning has mostly focused on variants of the synchronous gossip model. Specifically, Lian et al. [2017] considered this setting in the context of DNN training, while and Tang et al. [2018] and Koloskova et al. [2019b] also analyzed decentralized optimization with quantization in the synchronous model. Wang and Joshi [2018] and Koloskova et al. [2020] provided analysis frameworks for synchronous decentralized SGD with local updates, and possibly changing topologies. Lian et al. [2018] and Assran et al. 
[2018] focused specifically on reducing synchronization costs in this setting, and proposed algorithms with partially non-blocking communication, in which nodes may read a stale version of the interaction partner’s information, modelling e.g. a communication buffer. However, the maximum staleness must be bounded by a global variable τ , which must be enforced throughout the execution. As observed by Assran et al. [2018], enforcing this bound can cause blocking, and therefore the authors of these works propose to implement a relaxed roundbased model, in which nodes interact once per round in perfect matchings. Their algorithms provide O(1/ √ Tn) convergence rates, under analytical assumptions. Upon careful examination, we find that their analysis approach can be extended to the asynchronous gossip model we consider, by defining the “contact matrices” to correspond to pairwise interactions. However, this introduces two significant limitations. First, the analysis will not support local gradient updates to models nor quantized communication. If we remove these practical relaxations, our technique yields better bounds, as our potential analysis is specifically-tailored to this dynamic interaction model. Second, as we detail in the Appendix, some of their technical conditions imply existence of global synchronization. For Assran et al. [2018], as we detail in the Appendix, their analysis would not guarantee any non-trivial speedup due to parallelization in the asynchronous gossip model. We describe these issues in detail and present a systematic comparison in Appendix A. Lu and De Sa [2020] provided a novel approach to analyze decentralized SGD with quantization and limited asynchrony: specifically, their algorithm requires blocking communication, i.e. nodes have to synchronize explicitly during interactions, but may see old versions of eachothers’ models. More precisely, during each interaction, both parties are responsible for updating their local models, meaning that once node is woken up (we call it initiator node) and chooses interaction partner it has to block until the partner is woken up as well. In our case, initiator can update both its local model and the local model of its partner and proceed to the next step without blocking. Koloskova et al. [2019a] use a similar update rule in the synchronous model. Zhang and You [2021] recently proposed a decentralized algorithm which is fully-asynchronous as long as node activation rates and message delays are bounded. As noted earlier, bounding activation rates does imply blocking; however, tolerating (bounded) message delays does improve over our approach of updating models using atomic writes. The setting further differs in that they assume that nodes compute full (nonstochastic) gradients, as well as that the loss function satisfies the PL condition. In sum, we are the first to explicitly consider the asynchronous gossip model, and the impact of local updates, asynchrony, and quantization used in conjunction together with decentralized SGD. Our technique is new, relies on a fine-grained analysis of individual interactions, and can yield improved bounds even in the case where H = 1. Further, our algorithm is the first to allow for both communication-compression and non-blocking communication. From the implementation perspective, the performance of our algorithm matches or improves that of previous methods, notably D-PSGD [Lian et al., 2017], AD-PSGD [Lian et al., 2018] and SGP [Assran et al., 2018]. 
2 Preliminaries

The Distributed System Model. We consider a model which consists of n ≥ 2 nodes, each of which is able to perform local computation. We assume that the communication network of the nodes is a graph G with spectral gap λ₂, the second smallest eigenvalue of the Laplacian of G. Let ρmax and ρmin be the maximum and minimum degrees in G, respectively. We will focus on densely-connected topologies, which model supercomputing and cloud networks: for instance, the standard Dragonfly topology [Kim et al., 2008, Besta and Hoefler, 2014] is regular, densely connected and low-diameter, mimicking regular expanders. The execution is modelled as occurring in discrete steps, where in each step a new node (the "initiator") is sampled, and can then contact one of its neighbors (the "responder") uniformly at random. (At the algorithm level, the initiator is "sampled" once it completes its current computational step and seeks to interact with a neighbor.) We denote the number of steps for which we run by T. Globally, the communication steps can be seen as a sequence of sampled directed communication edges. Thus, the basic unit of time is a single pairwise interaction between two nodes. Notice however that in a real system Θ(n) of these interactions could occur in parallel. Thus, the standard global time measure is parallel time, defined as the total number of interactions divided by n, the number of nodes. Parallel time intuitively corresponds to the average number of interactions per node until convergence. This model is identical to the asynchronous gossip model [Xiao and Boyd, 2004], and to the population protocol model [Angluin et al., 2006].

Stochastic Optimization. We assume that the agents wish to jointly minimize a d-dimensional, differentiable function f : R^d → R. Specifically, we will assume the empirical risk minimization setting, in which agents are given access to a set of m data samples S = {s_1, ..., s_m} coming from some underlying distribution D, and to functions ℓ_i : R^d → R which encode the loss of the argument at the sample s_i. The goal of the agents is to converge on a model x* which minimizes the empirical loss over the m samples, that is, x* = argmin_x f(x) = argmin_x (1/m) Σ_{i=1}^m ℓ_i(x). We assume that each agent i has a local function f_i associated to its fraction of the data, i.e. for all x ∈ R^d: f(x) = (1/n) Σ_{i=1}^n f_i(x). Agents employ these samples to run a decentralized variant of SGD, described in detail in the next section. For this, we will assume that each agent i has access to unbiased stochastic gradients g̃_i of the function f_i, i.e. functions such that E[g̃_i(x)] = ∇f_i(x). Stochastic gradients can be computed by each agent by sampling i.i.d. from the distribution D, and computing the gradient of f at x with respect to that sample. Our analysis also extends to the case where each agent samples from its own partition of the data. We assume the following conditions about the objective function, although not all our results require the second moment bound:

1. Smooth Gradients: The gradient ∇f_i(x) is L-Lipschitz continuous for some L > 0, i.e. for all x, y ∈ R^d and every agent i:
‖∇f_i(x) − ∇f_i(y)‖ ≤ L‖x − y‖. (1)

2. Bounded Variance: The variance of the stochastic gradients is bounded by some σ² > 0, i.e. for all x ∈ R^d and every agent i:
E‖g̃_i(x) − ∇f_i(x)‖² ≤ σ². (2)

3. Bounded Local Function Variance: There exists ς² > 0 such that for all x ∈ R^d:
(1/n) Σ_{i=1}^n ‖∇f(x) − ∇f_i(x)‖² ≤ ς². (3)

4. Bounded Second Moment: The second moment of the stochastic gradients is bounded by some M² > 0, i.e. for all x ∈ R^d and every agent i:
E‖g̃_i(x)‖² ≤ M². (4)

Note that throughout this paper, for any random variable X, by E‖X‖² we mean E[‖X‖²].
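As a side illustration (not part of the paper's formal development), the interaction schedule defined by the system model above is straightforward to simulate; the following Python sketch, in which all function and variable names are our own, also makes the notion of parallel time concrete:

import numpy as np

def simulate_schedule(adjacency, T, rng=None):
    # Sample T steps of the asynchronous gossip model: each step draws a
    # uniformly random initiator, which contacts a uniformly random neighbor
    # in G. Returns the resulting sequence of directed communication edges.
    rng = rng or np.random.default_rng(0)
    n = len(adjacency)
    steps = []
    for _ in range(T):
        i = int(rng.integers(n))            # initiator, sampled uniformly
        j = int(rng.choice(adjacency[i]))   # responder, a random neighbor
        steps.append((i, j))
    return steps

# Parallel time = total interactions divided by n: the average number of
# interactions per node, the global time measure used in the analysis.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # toy complete graph on 3 nodes
schedule = simulate_schedule(adjacency, T=300)
print("parallel time:", len(schedule) / len(adjacency))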
Each node has a communication buffer which, for simplicity, we assume can be read and written atomically by each node; importantly, buffers can only hold quantized vectors.

Quantization Procedure. We use a quantization function which follows from Lemma 23 in (the full version of) Davies et al. [2021].

Corollary 2.1. (Quantization for Communication Buffers) Fix parameters R and ε > 0. There exists a quantization procedure defined by an encoding function Enc_{R,ε} : R^d → {0,1}* and a decoding function Dec_{R,ε} : R^d × {0,1}* → R^d such that, for any vector x ∈ R^d which we are trying to quantize, and any vector y which is used by the decoding, which we call the decoding key, if ‖x − y‖ ≤ εR^d, then with probability at least 1 − log log(‖x − y‖/ε) · O(R^{−d}), the function Q_{R,ε}(x) = Dec_{R,ε}(y, Enc_{R,ε}(x)) has the following properties:

1. (Unbiased decoding) E[Q_{R,ε}(x)] = E[Dec_{R,ε}(y, Enc_{R,ε}(x))] = x;
2. (Error bound) ‖Q_{R,ε}(x) − x‖ ≤ (R² + 7)ε;
3. (Communication bound) To compute Dec_{R,ε}(y, Enc_{R,ε}(x)), only the first B bits of Enc_{R,ε}(x) are needed, where B = O(d log(R‖x − y‖/ε)).

Proof. Lemma 23 of the full version of Davies et al. [2021] provides guarantees similar to the ones we want to prove, but assumes interactive message-passing communication between an encoding node u and a decoding node v. However, in their setting, the messages sent by u are non-adaptive: u simply sends quantizations using an increasing number of bits, until v replies confirming that it has decoded successfully. The number of bits sent during communication is upper bounded by O(d log(R‖x − y‖/ε)), where x is the vector node u is sending and y is the vector node v is using for decoding. In our setting, we use communication buffers, so node u can simply append all of its potential messages together as Enc_{R,ε}(x). Critically, notice that node u should append enough bits so that decoding is possible, since in our setting there is no way for v to acknowledge that it has received enough bits. This can be done in two ways. If u knows the distance between x and y, then u can simply write O(d log(R‖x − y‖/ε)) bits in the register. In the second case, u does not know the distance. Let T be the total number of times nodes communicate throughout our algorithm. We will show that, with high probability, all distances between encoded vectors and decoding keys are at most T^{17}εR (the dependence on T stems from the fact that we wish to show an upper bound holding with high probability; please see Lemma B.19 in the Appendix), and therefore at most O(d log T) bits for quantization suffice in the worst case. Thus, the node writes O(d log T) bits in the register, but when v tries to decode, it does not need all those bits: it reads and uses only the first O(d log(R‖x − y‖/ε)) bits.

Counting Communication Cost. We emphasize that, when we calculate the number of bits needed by quantization, we actually aim to measure the number of bits exchanged between u and v. In the setting we consider, which has local registers/communication buffers, this is the number of bits spent to read from (or to write to) the non-local register.
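The exact lattice-based construction behind Corollary 2.1 is involved; purely as a loose illustration of the key-based encode/decode interface (and emphatically not the actual scheme of Davies et al. [2021], whose constants and bit accounting differ), one can picture unbiased stochastic rounding onto an ε-grid, where the decoder's key determines how many index bits it must read:

import numpy as np

def q_encode(x, eps, rng=None):
    # Unbiased stochastic rounding of x onto an eps-grid: a toy stand-in for
    # Enc_{R,eps}. Rounding up with probability equal to the fractional part
    # makes the decoded value unbiased, with per-coordinate error <= eps
    # (the real scheme guarantees an l2 error of (R^2 + 7)eps).
    rng = rng or np.random.default_rng()
    scaled = x / eps
    low = np.floor(scaled)
    round_up = rng.random(x.shape) < (scaled - low)
    return (low + round_up).astype(np.int64)    # one grid index per coordinate

def q_decode(grid_idx, eps):
    return grid_idx * eps

def bits_needed(x, y, eps):
    # A decoder holding a key y with ||x - y|| <= D only needs to locate x's
    # grid cell inside a ball of radius ~D around y, i.e. O(d log(D / eps))
    # bits -- mirroring the communication bound in Corollary 2.1.
    d, D = x.size, np.linalg.norm(x - y)
    return int(d * np.ceil(np.log2(max(D / eps, 2.0))))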
Since the second case above involves writing a relatively large number of bits, we will use it only when u is writing a quantized value to its own register/buffer, and so does not need to communicate the bits. Then, only the O(d log(R‖x − y‖/ε)) bits read by v need to be communicated. To summarize, in our algorithm we will always ensure that whenever some node u writes a quantized value, it either knows the key which will be used for decoding it, or is writing to its local register.

3 The SwarmSGD Algorithm

We now describe a decentralized variant of SGD, designed to be executed by a population of n nodes, interacting over the edges of the communication graph G, as described above. The algorithm proceeds in individual communication steps, where in each step a node which has completed its local computation seeks a random neighbor to communicate with. We will alternatively say that a node gets activated (once it finishes its computation) and then becomes the initiator of the interaction.

The Communication Registers. The communication buffer of each node i consists of two registers: one containing an encoded version of its own (possibly outdated) model, which will only be written to by node i itself, and one holding an encoded version of its current model, which will only be written to by other nodes. (This second register can be seen as a "communication queue" for the nodes wishing to communicate with i.) Initially, all registers contain zero vectors.

Parallel Execution. For simplicity, we will skip the details of quantization, and assume that nodes write and read quantized models directly, without encoding and decoding steps. Both the current and the outdated model are zero vectors initially. Each node i computes a number of local gradients based on its last model view, and other nodes may update this model while i is computing. Hence, only after node i is done with computing local gradients does it read its updated current model. Let X̂_i be the value of the outdated model and let X_i be the value of the current model. Node i computes the average of the quantized models, (Q(X_i) + Q(X_j))/2, and writes it in the register which contains the current model of node j. Next, it computes (Q(X_i) + Q(X_j))/2 − ηh̃(X̂_i) (where η is the learning rate and h̃(X̂_i) is the sum of local gradients), and writes it in both of its local registers, one containing the current model and one containing the outdated model. Once the write is finished, it again proceeds to compute local gradients, based on the view (Q(X_i) + Q(X_j))/2 − ηh̃(X̂_i).

Sequential model. For the analysis, it is useful to map these parallel interactions to a sorted sequence of sequential ones. Thus, time steps track the interactions between agents, and each interaction consists of a random number of local steps which the activated node performs, plus one averaging step in which the activated node (the initiator) contacts a random neighbour. The analysis assumes that nodes get activated randomly, by independent Poisson clocks, which leads to a uniform global sampling distribution. In practice, this can be approximated by having the number of local gradient steps executed by each node be a geometric random variable of mean H. For the sake of practicality, our experiments will take H to be a small constant, instead of a random variable, which yields similar results. The pseudocode from the point of view of a single node i which was activated at step t + 1 is given in Algorithm 1.
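Before the formal notation, here is a minimal Python sketch of one sequential interaction, with quantization replaced by the identity (the setting of Corollary 4.2 below); all names are ours, and the real implementation uses MPI-based primitives (Section 5):

import numpy as np

def swarm_step(models, stale, i, j, grad, eta, H, rng):
    # One sequential SwarmSGD interaction, initiated by node i toward node j.
    # grad(x) stands for node i's stochastic gradient oracle.
    h = np.zeros_like(models[i])
    x = stale[i].copy()                       # local steps use the *outdated* model
    for _ in range(rng.geometric(1.0 / H)):   # H_i ~ Geometric with mean H
        g = grad(x)
        h += g
        x -= eta * g
    avg = (models[i] + models[j]) / 2.0
    models[j] = avg                           # one-sided write into j's buffer
    models[i] = avg - eta * h                 # initiator applies its local gradients
    stale[i] = models[i].copy()               # refresh i's outdated snapshot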
For t ≥ 0, let Enc(X̂^i_t) and Enc(X^i_t) be the values written in the registers containing the outdated and the current model of agent i after t steps, respectively. That is, X^i_t is the current model of agent i and X̂^i_t is the outdated model.

The Communication Procedure. Since i was activated at step t, we will assume that it has already computed H_i local gradients using the outdated model X̂^i_t, where H_i is a geometric random variable with mean H, as follows. Let h̃^0_i(X̂^i_t) = 0^d; for indices 1 ≤ q ≤ H_i, let h̃^q_i(X̂^i_t) = g̃_i(X̂^i_t − Σ_{s=0}^{q−1} η h̃^s_i(X̂^i_t)) be the q-th local gradient. Then, let h̃_i(X̂^i_t) = Σ_{q=1}^{H_i} h̃^q_i(X̂^i_t) be the sum of all computed local gradients. Alternatively, since we are in a sequential setting, we can assume that i performs this computation at step t.

First, i retrieves Q(X^i_t) (the quantized version of its current model) by decoding Enc(X^i_t) using the key Q(X̂^i_t). We note that i can obtain Q(X̂^i_t) simply by decoding Enc(X̂^i_t), using the key X̂^i_t (which it knows to full precision, since it calculated the value itself); this step does not cost any communication bits, since all of the terms involved are local to i's registers. Then, it contacts its interaction partner j. Node i calculates Q(X̂^j_t) by decoding Enc(X̂^j_t), again using X̂^i_t as a key, and then retrieves Q(X^j_t) by decoding Enc(X^j_t) with key Q(X̂^j_t). Then, i calculates

X^i_{t+1} = Q(X^i_t)/2 + Q(X^j_t)/2 − η h̃_i(X̂^i_t) and X^j_{t+1} = Q(X^j_t)/2 + Q(X^i_t)/2.

Next, node i calculates Enc(X^i_{t+1}) and writes it to its own register for outdated models. Here, we use the first case for quantization from Corollary 2.1: i is not aware of the key that other nodes will use for decoding, but since it is writing to its own local register, it can afford to use the worst-case O(d log T) bits. Additionally, it writes Enc(X^i_{t+1}) to its own register containing the current model, so that there are enough bits for Q(X̂^i_{t+1}). (Note that X̂^i_{t+1} = X^i_{t+1} has to be used as the decoding key.) Finally, it calculates Enc(X^j_{t+1}) and writes it in the register which contains the current model of j, using enough bits that it can be decoded using Q(X̂^j_{t+1}) (we have that X̂^j_{t+1} = X̂^j_t). Notice that, the way our algorithm is specified, every node which tries to decode Enc(X^j_{t+1}) will use Q(X̂^j_{t+1}) as a key (which i knows), hence Corollary 2.1 holds in this case as well. We emphasize that all this communication is one-way, as it does not require j's intervention. By Corollary 2.1, the total number of bits used is:

O(d log(R‖X̂^i_t − X̂^j_t‖/ε)) + O(d log(R‖Q(X̂^j_t) − X^j_t‖/ε)) + O(d log(R‖Q(X̂^j_t) − X^j_{t+1}‖/ε)).

(Recall that we only count reading and writing to other registers, and do not count operations i performs on its own registers.) We will show that we can make the probability of any instance of quantization failing less than 1/T^c, for some sufficiently large constant c, by setting the constant factor in the number of bits sufficiently high. Then, we can take a union bound over all instances of quantization throughout the algorithm, to show that none fail with high probability in T. We will then be able to prove the convergence of our algorithm conditioned on this event.

Avoiding race conditions. An interesting question is what happens when multiple nodes contact j concurrently. For conciseness, our pseudocode assumes that the update sequence in lines 8–14 of Algorithm 1 happens atomically, but this sequence can cause a data race.

Algorithm 1 Sequential SwarmSGD pseudocode for each interaction between nodes i and j.
1: % Let G be a communication graph.
2: % Initial models X^1_0 = X^2_0 = ... = X^n_0
3: for t = 0 to T − 1 do
4:   Sample the initiator node i uniformly at random.
5:   Node i samples a node j, adjacent to it in G, uniformly at random.
6:   Let t − τ^i_t be the last step at which node i was chosen as initiator.
7:   Let X̂^i_t = X^i_{t−τ^i_t} be its model from that step.
8:   Q(X^i_t) ← Dec(Q(X̂^i_t), Enc(X^i_t))
9:   Q(X̂^j_t) ← Dec(X̂^i_t, Enc(X̂^j_t))
10:  Q(X^j_t) ← Dec(Q(X̂^j_t), Enc(X^j_t))
11:  X^i_{t+1} ← Q(X^i_t)/2 + Q(X^j_t)/2 − η h̃_i(X̂^i_t)
12:  X^j_{t+1} ← Q(X^i_t)/2 + Q(X^j_t)/2
13:  Write Enc(X^i_{t+1}) to the registers containing the current and outdated models of node i
14:  Write Enc(X^j_{t+1}) to the register containing the current model of node j
15:  For k ≠ i, j: X^k_{t+1} = X^k_t.
16: end for

To mitigate this, we can use a bounded non-blocking queue [Michael and Scott, 1996] at each node instead of a single buffer. Thus, instead of updating the buffer value atomically, each node simply appends the corresponding quantized model mean to j's communication queue. In practice, this queue is extremely unlikely to be contended, since communication collisions are rare.

4 The Convergence of SwarmSGD

Let µ_t = Σ_{i=1}^n X^i_t / n be the mean over node models at time t. Our main result is the following:

Theorem 4.1. Assume the total number of steps T ≥ 10n, learning rate η = n/√T, and quantization parameters R = 2 + T^{3/d} and ε = ηHM/(R² + 7). Then, with probability at least 1 − O(1/T), Algorithm 1 converges at rate

(1/T) Σ_{t=0}^{T−1} E‖∇f(µ_t)‖² ≤ 2(f(µ_0) − f(x*))/(H√T) + 6(σ² + 6Hς²)/√T + 12HM²/√T + C·n²ρmax³H²L²M²/(T·ρmin·λ₂²),

for a constant C, and uses O(d log(ρmax²/(ρmin·λ₂)) + log T) expected communication bits per step.

Discussion. First, this notion of convergence is standard in the non-convex case [Lian et al., 2015, 2017, 2018], and each of the upper bound terms has an intuitive interpretation: the first represents the reduction in loss relative to the initialization, and gets divided by the number of local steps H, since progress is made in this term in every local step; the second represents the noise due to stochasticity, and is naturally linear in H, as H steps are taken in expectation between two interactions. (Recall that in our model T is the number of interactions, and TH is the expected number of gradient steps.) The fourth term encodes overheads caused by local steps, quantization, and graph structure; however, it is usually seen as negligible (cf. [Lu and De Sa, 2020]), due to the division by T. The third term is the critical one, as it implies a dependence on the second-moment bound. Intuitively, this term appears because our algorithm combines both non-blocking communication and quantization: first, unlike prior work, we do not assume an explicit upper bound τ on communication delay; in conjunction with quantization, the unbounded delays this allows mean that our estimate of the model average µ_t may become dependent on M for large delays, which causes this dependency. While this limitation appears inherent, we are able to remove it if we eliminate quantization: in this case, we get a negligible dependency on M. We formalize this in Corollary 4.2. Second, if we focus on the total number of steps to reach some error bound, we notice an interesting trade-off between the linear reduction in H in the first term, due to local steps, and the linear increase in H in the other terms.
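Purely as an illustrative aid (not part of the paper's artifact), one can tabulate the four terms of Theorem 4.1 to see this trade-off numerically; the parameter names below mirror the theorem, and the example values are arbitrary:

import math

def swarm_bound_terms(T, H, f_gap, sigma2, varsigma2, M2, L, n,
                      rho_max, rho_min, lam2, C=1.0):
    # The four terms of Theorem 4.1: the loss-gap term shrinks in H, while
    # the variance and second-moment terms grow linearly in H; the last term
    # is the O(1/T) overhead from local steps, quantization, and the graph.
    rt = math.sqrt(T)
    return (2 * f_gap / (H * rt),
            6 * (sigma2 + 6 * H * varsigma2) / rt,
            12 * H * M2 / rt,
            C * n**2 * rho_max**3 * H**2 * L**2 * M2 / (T * rho_min * lam2**2))

for H in (1, 2, 4, 8):   # sweep H to locate the knee of the trade-off
    print(H, sum(swarm_bound_terms(1e6, H, 1.0, 1.0, 1.0, 10.0, 1.0,
                                   32, 31, 31, 1.0)))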
Notice that, for dense and low-diameter graphs, such as the regular expanders popular in cluster networks, our convergence bound has no dependence on the graph parameters, and communication is linear in d. However, one limitation is that we could incur a log n dependency in the communication for highly-irregular and poorly-connected graphs. Finally, note that time T here counts total interactions. However, Θ(n) pairwise interactions occur independently in parallel, and so we can slightly abuse notation and replace T by nT in the above formula, to obtain an optimal Θ(√n) speedup in terms of wall-clock time. Yet, this speedup is dampened by the variance due to noisy local gradient steps, a fact which we will revisit in the experimental section.

Proof Overview. At a high level, the argument rests on two technical ideas. The first is that, in spite of noise and local steps, the nodes' parameters remain concentrated around the mean µ_t. The second is to leverage this, and bound the impact of stochastic noise and model staleness on convergence. In particular, the main technical difficulty in the proof is to correctly "encode" the fact that the parameters are well concentrated around the mean. A natural approach is to bound the model variance Γ_t after t interactions. Formally, we define Γ_t = Σ_{i=1}^n ‖X^i_t − µ_t‖², where µ_t = Σ_{i=1}^n X^i_t / n, as before. We bound the expected evolution of Γ_t over time, depending on the learning rate, the number of local steps, the quantization parameter, and the bound provided by the assumption on the stochastic gradients (the bound M²). The critical point is that the upper bound on the expectation of Γ_t does not depend on the number of interactions t. More precisely, if all the above hyper-parameters are constant, we get that E[Γ_t] = O(n). Our approach brings over tools from classic load-balancing [Berenbrink et al., 2009] to the multi-dimensional case. Three key elements of novelty in our case are that (1) for us the load-balancing process is dynamic, in the sense that new loads, i.e. gradients, get continually added; (2) the load-balancing process we consider is multi-dimensional, whereas the literature usually considers simple scalar weights; (3) the models can be outdated and quantized, which leads to a complex, noisy load-balancing process. We resolve this third and most challenging issue by using carefully-defined auxiliary potentials.

Removing the Second-Moment Bound. Upon reflection, we notice that we can render the dependency on M² negligible if we do not use quantization, but otherwise keep the algorithm the same:

Corollary 4.2. Given the previous assumptions and learning rate η = n/√T, for some constant C, Algorithm 1 with the identity as quantization converges at rate

(1/T) Σ_{t=0}^{T−1} E‖∇f(µ_t)‖² ≤ 2(f(µ_0) − f(x*))/(H√T) + 6(σ² + 6Hς²)/√T + C·n²ρmax³H²L²M²/(T·ρmin·λ₂²).

Notice that in this case the term containing the second-moment bound M² is dampened by a factor of 1/T, hence we can assume that Algorithm 1 converges at the close-to-optimal rate O(2(f(µ_0) − f(x*))/(H√T) + 6H(σ² + 6ς²)/√T). This result still improves upon previous analyses [Lian et al., 2018, Assran et al., 2018, Lu and De Sa, 2020] in the sense that communication is completely non-blocking (there is no τ), and we allow for local steps. Further, in the absence of quantization, and assuming that nodes perform a single local gradient step, we can entirely remove assumption (4) when T is large enough (e.g. for the fully-connected graph we will need T ≥ Ω(n³)). More precisely, we can attain a convergence rate of O((f(µ_0) − f(x*))/√T + (σ² + ς²)/√T + n³ρmax⁴L²(σ² + ς²)/(T·ρmin·λ₂³)). We leave the proof of this last extension to the full version of this work.
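To make the concentration potential from the proof overview concrete, the following small diagnostic (ours, for illustration only) computes Γ_t from a snapshot of the node models; the analysis shows that its expectation stays O(n) for constant hyper-parameters:

import numpy as np

def gamma_potential(models):
    # Gamma_t = sum_i ||X_t^i - mu_t||^2, with mu_t the average model.
    X = np.stack(models)        # shape (n, d)
    mu = X.mean(axis=0)
    return float(((X - mu) ** 2).sum())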
5 Experimental Results

In this section, we validate our analysis by applying the algorithm to training deep neural networks for image classification and machine translation. We map the algorithm onto a multi-node supercomputing setting, in which we have a large number of compute nodes connected by fast communication links. The key overhead in this setting is synchronization: at large node counts, the cost of synchronizing all nodes so they execute in lock-step can be very high; see e.g. [Li et al., 2019] for numerical results on different workloads. Transmission cost also becomes significant at large node counts and large model sizes. Decentralized methods can mitigate this overhead, since nodes synchronize only sporadically and in pairs.

Target System and Implementation. We run SwarmSGD on the CSCS Piz Daint supercomputer, which is composed of Cray XC50 nodes, each with a Xeon E5-2690v3 CPU and an NVIDIA Tesla P100 GPU, using a state-of-the-art Aries interconnect over a Dragonfly network topology, which is regular. Please see Piz [2019] for more details. We implemented SwarmSGD in PyTorch and TensorFlow using MPI-based primitives, with non-blocking averaging. The PyTorch implementation is built on top of the SGP framework [Assran et al., 2018], and uses SwarmSGD to train ResNets on the CIFAR-10/100 [Krizhevsky et al., 2014] and ImageNet [Russakovsky et al., 2015] datasets, while we use the TensorFlow implementation to train the original version of the Transformer-XL model [Vaswani et al., 2017] on the WMT17 (En-Ge) dataset. All algorithms use the same topology overlay, which is fully-connected: according to their theory and experiments, this well-connected overlay should maximize convergence speed. SGP was run with overlap factor 1, following Assran et al. [2018].

Training Process. Our training methodology follows data-parallel training, with some differences due to decentralization, and is identical to previous work on decentralized and local SGD, e.g. Lian et al. [2017], Assran et al. [2018], Lin et al. [2018]. Training proceeds in epochs, each of which corresponds to the processes collectively performing a full pass over the dataset. At the beginning of each epoch, we re-shuffle the dataset and partition it among processes [Lin et al., 2018]. As noted in previous work [Lian et al., 2017, 2018, Assran et al., 2018], variants of decentralized SGD are not always able to recover sequential SGD accuracy within the same number of epochs as this baseline. This is justified theoretically, as the slower mixing can affect convergence, but also intuitively, as each model sees significantly fewer updates per epoch. Thus, we will allow the decentralized schemes to execute for more epochs, by a constant multiplier factor between 1 and 3. To reduce these multipliers, we experimented with SlowMo [Wang et al., 2019]; we found that it improved results across methods on CIFAR-10, but not at ImageNet scale; therefore, the provided results do not include it. Once we have fixed the number of epochs, we do not alter the other training hyperparameters: in particular, the learning rate schedule, momentum and weight decay terms are identical to the standard values for sequential SGD, for each individual model.

Accuracy and Speed.
We first examined whether SwarmSGD can in fact recover full accuracy versus the sequential or large-batch SGD baselines. In Table 1 we provide an overview of the parameter values needed to recover large-batch SGD accuracy (following Goyal et al. [2017]) using SwarmSGD, on the ResNet, ImageNet and CIFAR tasks. We execute on 32 nodes for ImageNet, and on 8 nodes for CIFAR-10. (Local batch sizes are 128 for ResNet20, ResNet50, and ResNet18. Quantization is not applied in these experiments.) The results show that Swarm can recover or slightly exceed the accuracy of the large-batch baselines, and that it has lower practical communication cost relative to existing methods (see Figure 2(b), where we separate the average computation cost per batch). However, Swarm requires significant additional passes over the data (up to 2.7×) to achieve full accuracy, which negates its performance benefits in this specific setting, relative to large-batch SGD. (Please see the Supplementary Material for end-to-end time comparisons.) This partly negative finding is in line with previous work on decentralized methods [Assran et al., 2018].

Next, we examine accuracy for the WMT17 task. The results are provided in Figure 1(a), in accuracy-vs-time format, for 16 and 32 nodes, executing for 10 global epochs. Here, the large-batch SGD (LB-SGD) baseline (BLEU score 26.1 at 16 nodes) is a poor alternative at high node counts due to the model size: its throughput is low, and drops catastrophically at 64 nodes due to the network becoming severely bandwidth-bottlenecked (see Figure 1(b)). At 16 nodes, Swarm slightly exceeds the baseline accuracy at 26.17 BLEU, for an end-to-end speedup of ∼1.5×. In the same setting, Swarm outperforms all other decentralized methods (the fastest previous method, AD-PSGD, is 30% slower, and less accurate), both in terms of BLEU score and in terms of end-to-end time. (The objective loss graph is similar, and is provided in the Appendix.) At 32 nodes, all decentralized methods reach lower scores (∼23.5) after 10 epochs. However, we observed experimentally that running Swarm for an additional 5 epochs (multiplier 1.5) at 32 nodes recovered a BLEU score of ∼25.72, which is 30% faster than the 16-node version in terms of end-to-end time (omitted for visibility).

In addition, we investigated 1) the accuracy of the real average of all models throughout training: it is usually more accurate than an arbitrary model, but not significantly so, corroborating the claim that individual models tend to stay close to the mean; 2) the influence of the number of local steps on accuracy: perhaps surprisingly, we were able to recover baseline accuracy on ResNet18/ImageNet for up to 4 local steps (see Figure 2(a)); 3) the impact of quantization on convergence, where we were able to recover accuracy when applying 8-bit model quantization to Swarm. We encourage the reader to examine the full experimental report in the Appendix, which contains data on these experiments, as well as additional ablation studies.

Discussion. Generally, the performance of SwarmSGD appears to be slightly superior to that of previous decentralized methods (see Figure 1 for an illustration, and Figure 2(b) for a performance breakdown). We investigated this advantage, and found that the per-step communication cost of Swarm, without quantization, is similar to AD-PSGD; however, our algorithm benefits from the reduced communication frequency: nodes communicate at least 2× less often, and therefore incur lower average communication cost.
In particular, a closer examination of the average batch times in Figure 2(b) shows that time per node per batch (including communication and computation) is largely constant as we increase the number of nodes, which suggests good scaling behaviour. The main disadvantage of Swarm is that, similar to previous decentralized methods, it may need additional data passes in order to fully recover accuracy at high node counts. However, we also note that our method did not benefit from the high level of hyperparameter tuning applied to large-batch SGD, e.g. Goyal et al. [2017]. We find it interesting that this accuracy issue is less prevalent in the context of large, over-parameterized models, such as the Transformer, where Swarm can be a viable alternative to large-batch SGD within the same number of epochs. 6 Conclusions and Future Work We analyzed the convergence of SGD in an extremely decoupled model of distributed computing, in which nodes mostly perform independent SGD updates, interspersed with intermittent pairwise averaging steps, which may be performed in an inconsistent and noisy manner. We showed that SGD still converges in this restrictive setting, even under consistency relaxations. Empirical results complement our analysis, showing that this method can outperform previous decentralized algorithms, and can even be competitive against large-batch SGD for very large models. A natural extension would be to generalize the bounds to arbitrary communication graphs. From the practical perspective, one extension would be to reduce the additional training epochs, and to experiment on large-scale decentralized testbeds. Acknowledgments and Disclosure of Funding We gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). PD partly conducted this work while at IST Austria and was supported by the European Union’s Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 754411. SL was funded in part by European Research Council (ERC) under the European Union’s Horizon 2020 programme (grant agreement DAPP, No. 678880, and EPiGRAM-HS, No. 801039).
1. What is the focus of the paper regarding asynchronous decentralized optimization, and what are the key contributions?
2. What are the strengths of the proposed approach, particularly in its compression algorithm and convergence rates?
3. Do you have any questions or concerns about the paper's assumptions, such as the second moment bound on gradients?
4. How does the reviewer assess the paper's clarity, quality, novelty, and reproducibility?
5. Are there any limitations or potential improvements regarding the SwarmSGD method?
Summary Of The Paper
Review
Summary Of The Paper
This paper discusses a new variant of asynchronous decentralized SGD called SwarmSGD. Unlike many previous algorithms, SwarmSGD does not suffer from deadlocks. A new compression algorithm is used in SwarmSGD, which differs from commonly used ones such as stochastic rounding. Convergence rates of SwarmSGD are given, with a discussion of the assumption of a second-moment bound on the gradients, and experiments on language modeling and ImageNet are conducted.

Review
The paper addresses an important question in asynchronous decentralized optimization, and is well written. The content is generally easy to follow.

Pros:
- The application of (Davies et al., 2021) in decentralized optimization seems interesting. The idea of compressing the distance of the model rather than the model itself is well-motivated.
- The paper provides a detailed comparison to previous algorithms such as AD-PSGD and SGP (some of it in the appendix), which clearly states the contribution and the challenges.
- I didn't check all the proofs line by line, but the overall logic and the asymptotic complexity seem reasonable to me.
- The experiments are comparatively large-scale compared to other algorithm-type papers in the community.

Cons:
- In Theorem 4.1, the last term in the convergence rate is a little counter-intuitive. If SwarmSGD runs on a fully connected graph, it is expected to converge faster, since the spectral gap increases. However, if we substitute ρ_max = n for the complete graph into the last term, it becomes O(n⁵); and if, in such a case, we need the 1/T term to be the leading term, we would require T ≥ O(n¹⁰). This seems a little divorced from reality; could you elaborate on why using a better graph worsens the convergence rate?
- I'm a little confused by the statement of non-blocking and staleness in this paper. For starters, if worker i is updating the communication buffer of worker j while worker j needs to read from its communication buffer for interacting with worker m, it should wait for i to finish writing the buffer. Otherwise, it causes additional error, as in the Hogwild setting. A completely asynchronous execution should use one-sided writing, such as the ADDP algorithm in this paper: (https://arxiv.org/pdf/1901.08215.pdf). In fact, line 4 in Algorithm 1 already implicitly assumes some sort of blocking: two workers cannot be sampled simultaneously, and the updates in one iteration are completely atomic. The same applies to the staleness: if node i is sampled uniformly, a staleness bound is implicitly placed.
- In the abstract, the authors state the algorithm works in the heterogeneous-data case, which does not seem to hold in the main theorem. Specifically, a bound ς² is still assumed in Equation (3) and shows up in all the convergence rates. An algorithm robust to the heterogeneous case should be invariant to this bound. See Theorem 2 of D²: http://proceedings.mlr.press/v80/tang18a/tang18a.pdf.
- A minor concern: in AD-PSGD, T is the number of queries to the gradient oracle, while in this paper, T is defined as the total number of interactions. Since each node performs H local steps, for a fair comparison the T in the convergence rate should be replaced by TH. However, this calibrated rate seems to have an additional H term on the sample-complexity term σ². Why have local steps if they compromise the convergence rate while not having explicit benefits in the experiments?

Updates
I thank the authors for adequately addressing my concerns.
I suggest including some of these points in the paper to make it clearer.
NIPS
Title Shape and Material from Sound

Abstract Hearing an object falling onto the ground, humans can recover rich information including its rough shape, material, and falling height. In this paper, we build machines to approximate such competency. We first mimic human knowledge of the physical world by building an efficient, physics-based simulation engine. Then, we present an analysis-by-synthesis approach to infer properties of the falling object. We further accelerate the process by learning a mapping from a sound wave to object properties, and using the predicted values to initialize the inference. This mapping can be viewed as an approximation of human commonsense learned from past experience. Our model performs well on both synthetic audio clips and real recordings without requiring any annotated data. We conduct behavior studies to compare human responses with ours on estimating object shape, material, and falling height from sound. Our model achieves near-human performance.

1 Introduction From a short audio clip of interacting objects, humans can recover the number of objects involved, as well as their materials and surface smoothness [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Siegel et al., 2014]. How does our cognitive system recover so much content from so little? What is the role of past experience in understanding auditory data? For physical scene understanding from visual input, recent behavioral and computational studies suggest that human judgments can be well explained as approximate, probabilistic simulations of a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013]. These studies suggest that the brain encodes rich, but noisy, knowledge of the physical properties of objects and the basic laws of physical interactions between objects. To understand, reason, and predict about a physical scene, humans seem to rely on simulations from this mental physics engine. In this paper, we develop a computational system to interpret audio clips of falling objects, inspired by the idea that humans may use a physics engine as part of a generative model to understand the physical world. Our generative model has three components. The first is an object representation that includes the object's 3D shape, position in space, and physical properties such as mass, Young's modulus, Rayleigh damping coefficients, and restitution. We aim to infer all these attributes from auditory inputs. The second component is an efficient, physics-based audio synthesis engine. Given an initial scene setup and object properties, the engine simulates the object's motion and generates its trajectory using rigid body physics. It also produces the corresponding collision profile: when, where, and how collisions happen. The object's trajectory and collision profile are then combined with its pre-computed sound statistics to generate the sound it makes during the physical event. With this efficient forward model, we can then infer object properties using analysis-by-synthesis; for each audio clip, we want to find a set of latent variables that best reproduce it. The third component of the model is therefore a likelihood function that measures the perceptual distance between two sounds. Designing such a likelihood function is typically challenging; however, we observe that features like the spectrogram are effective when the latent variables have limited degrees of freedom.
This motivates us to infer latent variables via methods like Gibbs sampling, where we focus on approximating the conditional probability of a single variable given the others. The inference procedure can be further accelerated with a self-supervised learning paradigm inspired by the wake/sleep phases in Helmholtz machines [Dayan et al., 1995]. We train a deep neural network as the recognition model to regress object properties from sound, where the training data are generated using our inference algorithm. Then, for any future audio clip, the output of the recognition model can be used as a good initialization for the sampling algorithm to converge faster. We evaluate our models on a range of perception tasks: inferring object shape, material, and initial height from sound. We also collect human responses for each task and compare them with model estimates. Our results indicate that, first, humans are quite successful in these tasks; second, our model not only closely matches human successes, but also makes errors similar to those humans make. For these quantitative evaluations, we have mostly used synthetic data, where ground truth labels are available. We further evaluate the model on real recordings to demonstrate that it also performs well on real-world audio. We make three contributions in this paper. First, we propose a novel model for estimating physical object properties from auditory inputs by incorporating the feedback of a physics engine and an audio engine into the inference process. Second, we incorporate a deep recognition network with the generative model for more efficient inference. Third, we evaluate our model and compare it to humans on a variety of judgment tasks, and demonstrate the correlation between human responses and model estimates.

2 Related Work Human visual and auditory perception Psychoacoustics researchers have explored how humans infer object properties, including shape, material and size, from audio over the past decades [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Rocchesso and Fontana, 2003, Klatzky et al., 2000, Siegel et al., 2014]. Recently, McDermott et al. [2013] proposed compact sound representations that capture semantic information and are informative of human auditory perception. Sound simulation Our sound synthesis engine builds upon and extends existing sound simulation systems in computer graphics and computer vision [O'Brien et al., 2001, 2002, James et al., 2006, Bonneel et al., 2008, Van den Doel and Pai, 1998, Zhang et al., 2017]. Van den Doel and Pai [1998] simulated object vibration using the finite element method and approximated the vibrating object as a single point source. O'Brien et al. [2001, 2002] used the Rayleigh method to approximate wave equation solutions for better synthesis quality. James et al. [2006] proposed to solve Helmholtz equations using the Boundary Element Method, where each object's vibration mode is approximated by a set of vibrating points. Recently, Zhang et al. [2017] built a framework for synthesizing large-scale audio-visual data. In this paper, we accelerate the framework of Zhang et al. [2017] to achieve near real-time rendering, and explore learning object representations from sound with the synthesis engine in the loop. Physical Object Perception There has been growing interest in understanding physical object properties, like mass and friction, from visual input or scene dynamics [Chang et al., 2017, Battaglia et al., 2016, Wu et al., 2015, 2016, 2017].
Much of the existing research has focused on inferring object properties from visual data. Recently, researchers have begun to explore learning object representations from sound. Owens et al. [2016a] attempted to infer material properties from audio, focusing on the scenario of hitting objects with a drumstick. Owens et al. [2016b] further demonstrated audio signals can be used as supervision on learning object concepts from visual data, and Aytar et al. [2016] proposed to learn sound representations from corresponding video frames. Zhang et al. [2017] discussed the complementary role of auditory and visual data in recovering both geometric and physical object properties. In this paper, we learn physical object representations from audio through a combination of powerful deep recognition models and analysis-by-synthesis inference methods. Analysis-by-synthesis Our framework also relates to the field of analysis-by-synthesis, or generative models with data-driven proposals [Yuille and Kersten, 2006, Zhu and Mumford, 2007, Wu et al., 2015], as we are incorporating a graphics engine as a black-box synthesizer. Unlike earlier methods that focus mostly on explaining visual data, our work aims to infer latent parameters from auditory data. Please refer to Bever and Poeppel [2010] for a review of analysis-by-synthesis methods. 3 An Efficient, Physics-Based Audio Engine At the core of our inference pipeline is an efficient audio synthesis engine. In this section, we first give a brief overview of existing synthesis engines, and then present our technical innovations on accelerating them for real-time rendering in our inference algorithm. 3.1 Audio Synthesis Engine Audio synthesis engines generate realistic sound by simulating physics. First, rigid body simulation produces the interaction between an object and the environment, where Newton’s laws dictate the object’s motion and collisions over time. Each collision causes the object to vibrate in certain patterns, changing the air pressure around its surface. These vibrations propagate in air to the recorder and create the sound of this physical process. Rigid Body Simulation Given an object’s 3D position and orientation, and its mass and restitution, a physics engine can simulate the physical processes and output the object’s position, orientation, and collision information over time. Our implementation uses an open-source physics engine, Bullet [Coumans, 2010]. We use a time step of 1/300 second to ensure simulation accuracy. At each time step, we record the 3D pose and position of the object, as well as the location, magnitude, and direction of collisions. The sound made by the object can then be approximated by accumulating sounds caused by those discrete impulse collisions on its surface. Audio Synthesis The audio synthesis procedure is built upon previous work on simulating realistic sounds [James et al., 2006, Bonneel et al., 2008, O’Brien et al., 2001]. To facilitate fast synthesis, this process is decomposed into two modules, one offline and one online. The offline part first uses finite element methods (FEM) to obtain the object’s vibration modes, which depend on the shape and Young’s modulus of the object. These vibration modes are then used as Neumann boundary conditions of the Helmholtz equation, which can be solved using boundary element methods (BEM). We use the techniques proposed by James et al. [2006] to approximate the solution by modeling the pressure fields with a sparse set of vibrating points. 
Note that the computation above only depends on the object's intrinsic properties, such as shape and Young's modulus, and not on extrinsics such as its position and velocity. This allows us to pre-compute a number of shape-modulus configurations before simulation; only minimal computation is needed during the online simulation. The online part of the audio engine loads the pre-computed approximations and decomposes impulses on the surface mesh of the object into its modal bases. At the observation point, the engine measures the pressure changes induced by vibrations in each mode, and sums them up to produce the simulated sound. An evaluation of the fidelity of these simulations can be found in Zhang et al. [2017].

3.2 Accelerating Audio Synthesis Analysis-by-synthesis inference requires the audio engine to be highly efficient; however, a straightforward implementation of the above simulation procedure would be computationally expensive. We therefore present technical innovations that accelerate the computation to near real-time. First, we select the most significant modes excited by each impulse until their total energy reaches 90% of the energy of the impulse. Ignoring the sound components generated by the less significant modes reduces the computational time by about 50%. Second, we stop the synthesis process if the amplitude of the damped sound goes below a certain threshold, since it is unlikely to be heard. Third, we parallelize the synthesis process by tackling collisions separately, so that each can be computed on an independent thread. We then integrate them into a shared buffer to generate the final audio according to their timestamps. The effect of this acceleration is shown in Table 1. The online sound synthesis only involves variables that are fully decoupled from the offline stage, which enables us to freely manipulate the other variables with little computational cost during simulation.

3.3 Generating Stimuli Because real audio recordings with rich labels are hard to acquire, we synthesize random audio clips using our physics-based simulation to evaluate our models. Specifically, we focus on a single scenario: shape primitives falling onto the ground. We first construct an audio dataset that includes 14 primitives (some shown in Table 2), each with 10 different specific moduli (defined as Young's modulus over density). After pre-computing their shape-modulus configurations, we can generate synthetic audio clips in a near-real-time fashion. Because the process of an object falling onto the ground is relatively fast, we set the total simulation time of each scenario to 3 seconds. Details of our setup can be found in Table 2.
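As a rough illustration of the online stage (a toy stand-in for the FEM/BEM pipeline above, with made-up mode parameters rather than ones actually derived from shape and Young's modulus), each collision excites a set of modes and contributes a sum of damped sinusoids:

import numpy as np

def synthesize(collisions, modes, sr=44100, dur=3.0, amp_floor=1e-4):
    # collisions: list of (time_sec, magnitude); modes: list of
    # (freq_hz, damping, gain). Mirrors two of the accelerations above:
    # inaudible contributions are skipped, and collisions are independent,
    # so in principle each could be rendered on its own thread.
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for t0, mag in collisions:
        for f, d, g in modes:
            tau = np.clip(t - t0, 0.0, None)
            env = mag * g * np.exp(-d * tau)
            env[t < t0] = 0.0
            if env.max() < amp_floor:       # early exit: inaudible mode
                continue
            out += env * np.sin(2 * np.pi * f * tau)
    return out

# Hypothetical usage with two collisions and two modes:
clip = synthesize([(0.5, 1.0), (0.9, 0.6)],
                  [(440.0, 8.0, 0.3), (1270.0, 20.0, 0.1)])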
4 Inference In this section, we investigate four models for inferring object properties, each corresponding to a different training condition. Inspired by how humans can infer scene information using a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013], we start from an unsupervised model whose input is a single test case with no annotation. We adopt Gibbs sampling over the latent variables to find the combination that best reproduces the given audio. We then extend the model to include a deep neural network, analogous to what humans may learn from past experience. The network is trained using labels inferred by the unsupervised model. During inference, the sampling algorithm uses the network prediction as the initialization. This self-supervised learning paradigm speeds up convergence. We also investigate a third case, in which labels can be acquired but are extremely coarse. We first train a recognition model with weak labels, then randomly pick candidates from those labels as an initialization for our analysis-by-synthesis inference. Lastly, to understand performance limits, we train a deep neural network with fully labeled data, which yields the upper-bound performance.

4.1 Models Unsupervised Given an audio clip S, we would like to recover the latent variables x that make the reproduced sound g(x) most similar to S. Let L(·, ·) be a likelihood function that measures the perceptual distance between two sounds; our goal is then to maximize L(g(x), S), which we denote p(x) for brevity. In order to find the x that maximizes p(x), p(x) can be treated as a distribution p̂(x) scaled by an unknown partition function Z. Since we do not have an exact form for p(·) or p̂(x), we apply Gibbs sampling to draw samples from p(x). Specifically, at sweep round t, we update each variable x_i by drawing samples from

p̂(x_i | x_1^t, x_2^t, ..., x_{i−1}^t, x_{i+1}^{t−1}, ..., x_n^{t−1}). (1)

Such conditional probabilities are straightforward to approximate. For example, to sample Young's modulus conditioned on the other variables, we can use the spectrogram as a feature and measure the l2 distance between the spectrograms of two sounds, because Young's modulus will only affect the frequency at each collision. Indeed, we can use the spectrogram as the feature for all variables except height. Since the height can be inferred from the time of the first collision, a simple likelihood function can be designed to measure the time difference between the first impacts in the two sounds. Note that this is only an approximate measure: the object's shape and orientation also affect, although only slightly, the time of first impact. To sample from the conditional probabilities, we adopt the Metropolis–Hastings algorithm, where samples are drawn from a Gaussian distribution and accepted by flipping a biased coin according to the likelihood of the new sample compared with the previous one. Specifically, we calculate the l2 distance d_t in feature space between g(x_t) and S. For a new sample x_{t+1}, we also calculate the l2 distance d_{t+1} in feature space between g(x_{t+1}) and S. The new sample is accepted if d_{t+1} is smaller than d_t; otherwise, x_{t+1} is accepted with probability exp(−(d_{t+1} − d_t)/T), where T is a time-varying temperature inspired by the simulated annealing algorithm. In our implementation, T is set as a quadratic function of the current MCMC sweep number t.

Self-supervised Learning To accelerate the above sampling process, we propose a self-supervised model, which is analogous to a Helmholtz machine trained by the wake-sleep algorithm. We first train a deep neural network whose labels are generated by the unsupervised inference model suggested above, run for a limited number of iterations. For a new audio clip, our self-supervised model uses the result from the neural network as an initialization, and then runs our analysis-by-synthesis algorithm to refine the inference. By making use of the past experience which trained the network, the sampling process starts from a better position and requires fewer iterations to converge than the unsupervised model.
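A minimal sketch of the sampler described above is given below; synth stands for the audio engine g(·) and feat for the feature extractor (e.g., the spectrogram), both treated as black boxes, and the annealing constants are our own placeholders rather than the paper's:

import numpy as np

def mh_within_gibbs(x0, synth, feat, target, sweeps=80, sigma=0.1, rng=None):
    # Metropolis-Hastings-within-Gibbs: propose one coordinate at a time from
    # a Gaussian; accept if the feature-space l2 distance to the target sound
    # drops, else accept with probability exp(-(d_new - d_old) / T_t), where
    # the temperature T_t varies quadratically with the sweep number.
    # (Discrete variables such as shape would instead be resampled uniformly.)
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    target_feat = feat(target)
    d_old = np.linalg.norm(feat(synth(x)) - target_feat)
    for t in range(sweeps):
        temp = 1.0 / (1.0 + 0.05 * t * t)     # placeholder annealing schedule
        for k in range(len(x)):
            prop = x.copy()
            prop[k] += rng.normal(0.0, sigma)
            d_new = np.linalg.norm(feat(synth(prop)) - target_feat)
            if d_new < d_old or rng.random() < np.exp(-(d_new - d_old) / temp):
                x, d_old = prop, d_new
    return x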
Weakly-supervised Learning We further investigate the case where weak supervision might help accelerate the inference process. Since the latent variables we aim to recover are hard to obtain in real-world settings, it is more realistic to assume that we could acquire very coarse labels, such as the type of material, rough attributes of the object's shape, the height of the fall, etc. Based on such assumptions, we coarsen the ground truth labels for all variables. For our primitive shapes, three attributes are defined, namely "with edge," "with curved surface," and "pointy." The material parameters, i.e., specific modulus, Rayleigh damping coefficients and restitution, are mapped to steel, ceramic, polystyrene and wood by finding the nearest neighbor to the parameters of those real materials. Height is divided into "low" and "high" categories. A deep convolutional neural network is trained on our synthesized dataset with these coarse labels. As shown in Figure 4, even when trained using coarse labels, our network learns features very similar to the ones learned by the fully supervised network. To go beyond the coarse labels, the unsupervised model is then applied using the initialization suggested by the neural network.

Fully-supervised Learning To explore the performance upper bound of the inference tasks, we train an oracle model with ground truth labels. To visualize the abstractions and characteristic features learned by the oracle model, we plot the inputs that maximally activate some hidden units in the last layer of the network. Figure 4 illustrates some of the most interesting waveforms. A selection of units learned to recognize specific temporal patterns, while others were sensitive to specific frequencies. Similar patterns were found in the weakly and fully supervised models.

4.2 Contrasting Model Performance We evaluate how well our model performs under different settings, studying how past experience or coarse labeling can improve on the unsupervised results. We first present the implementation details of all four models, then compare their results on all inference tasks.

Sampling Setup We perform 80 sweeps of MCMC sampling over all 7 latent variables; in every sweep, each variable is sampled twice. Shape, specific modulus and rotation are sampled from uniform distributions across their corresponding dimensions. For the other, continuous variables, we define an auxiliary Gaussian variable x_i ∼ N(µ_i, σ_i²) for sampling, where the mean µ_i is based on the current state. To evaluate the likelihood function between the input and the sampled audio (both with a sample rate of 44.1kHz), we compute the spectrogram of the signal using a Tukey window of length 5,000 with a 2,000-sample overlap. For each window, a 10,000-point Fourier transform is applied.

Deep Learning Setup Our fully supervised and self-supervised recognition models use the architecture of SoundNet-8 [Aytar et al., 2016], shown in Figure 3, which takes an arbitrarily long raw audio wave as input and produces a 1024-dim feature vector. We append to it a fully connected layer to produce a 28-dim vector as the final output of the neural network. The first 14 dimensions are the one-hot encoding of the primitive shapes, and the next 10 dimensions are encodings of the specific modulus. The last 4 dimensions regress the initial height, the two Rayleigh damping coefficients, and the restitution, respectively. All the regression dimensions are normalized to a [−1, 1] range. The weakly supervised model preserves the structure of the fully supervised one, but with an 8-dim final output: 3 for shape attributes, 1 for height, and 4 for materials. We used stochastic gradient descent for training, with a learning rate of 0.001, a momentum of 0.9 and a batch size of 16. Mean Squared Error (MSE) loss is used for back-propagation. We implemented our framework in Torch7 [Collobert et al., 2011], and trained all models from scratch.
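For concreteness, the spectrogram feature and l2 distance from the sampling setup above can be computed as follows (a sketch assuming equal-length clips, using SciPy's standard windowing API):

import numpy as np
from scipy.signal import spectrogram
from scipy.signal.windows import tukey

def sound_distance(a, b, sr=44100):
    # l2 distance between spectrograms: Tukey window of length 5,000,
    # 2,000-sample overlap, 10,000-point FFT, matching the setup above.
    win = tukey(5000)
    _, _, Sa = spectrogram(a, fs=sr, window=win, noverlap=2000, nfft=10000)
    _, _, Sb = spectrogram(b, fs=sr, window=win, noverlap=2000, nfft=10000)
    return float(np.linalg.norm(Sa - Sb))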
Results Results for the four inference models proposed above are shown in Table 3. For shape and specific modulus, we evaluate the results as classification accuracies; for height, Rayleigh damping coefficients, and restitution, results are evaluated by MSE. Before calculating MSE, we normalize the values of each latent variable to the [−1, 1] interval, so that MSE scores are comparable across variables. From Table 3, we can conclude that the self-supervised and weakly supervised models benefit from a better initialization of the analysis-by-synthesis algorithm, especially on the last four continuous latent variables. One can also observe that their final inference accuracies and MSEs are marginally better than in the unsupervised case. To illustrate the rate of convergence, we plot the likelihood value, exp(−kd), where d is the distance between sound features, along the iterations of MCMC in Figure 5. The mean curve of the self-supervised model meets our expectation: it converges much faster than the unsupervised model, and reaches a slightly higher likelihood at the end of 30 iterations. The fully supervised model, which is trained on 200,000 audio clips with the full set of ground-truth labels, yields near-perfect results for all latent variables.

5 Evaluations We first evaluate our inference procedure by comparing its performance with that of humans. The evaluation is conducted using synthetic audio with ground truth labels. Then, we investigate whether our inference algorithm performs well on real-world recordings. Given recorded audio, our algorithm can distinguish the shape from a set of candidates.

5.1 Human Studies We seek to evaluate our model relative to human performance. We designed three tasks for our subjects: inferring the object's shape, material, and height of fall from sound, all intuitive attributes when hearing an object fall. These tasks are designed as classification problems, where the labels are in accordance with the coarse labels used by our weakly-supervised model. The study was conducted on Amazon Mechanical Turk. For each experiment (shape, material, height), we randomly selected 52 test cases. Before answering the test questions, each subject is shown 4 training examples with ground truth to become familiar with the setup. We collected 192 responses for the experiment on inferring shape, 566 for material, and 492 for height, resulting in a total of 1,250 responses.

Inferring Shapes After becoming familiar with the experiment, participants are asked to make three binary judgments about the shape by listening to our synthesized audio clip. Prior examples are given so that people understand the distinctions of the "with edge," "with curved surface," and "pointy" attributes. As shown in Figure 6, humans are relatively good at recognizing shape attributes from sound; the unsupervised algorithm reaches a similar level of competency after 10–30 iterations.

Inferring Materials We sampled audio clips whose physical properties – density, Young's modulus and damping coefficients – are in the vicinity of the true parameters of steel, ceramic, polystyrene and wood. Participants are required to choose one out of the four possible materials.
However, it can still be challenging to distinguish between materials, especially when the sampled clips have similar damping and specific modulus. Our algorithm occasionally confuses steel with ceramic and ceramic with polystyrene, which is in accordance with human performance, as shown in Figure 5. Inferring Heights In this task, we ask participants to choose whether the object is dropped from a high position or a low one. We provided example videos and audio clips to help participants anchor the reference heights. Under our scene setup, the touchdown times of the two extremes of the height range differ by 0.2 s. To address the potential bias that algorithms may be better at exploiting falling time, we took several precautions. First, we explicitly told participants that the silence at the beginning is informative. Second, we made sure that the anchoring example remained available during the test, so participants could always compare against and refer to it. Third, each participant had to play each test clip manually, and therefore had control over when the audio began. Last, we tested on different object shapes. Because the time of first impact is shape-dependent, differently shaped objects dropped from the same height have first impacts at different times, making it harder for the machine to exploit this cue. 5.2 Transferring to Real Scenes In addition to the synthetic data, we designed real-world experiments to test our unsupervised model. We selected three candidate shapes: tetrahedron, octahedron, and dodecahedron. We recorded the sound of a metal octahedron dropping onto a table and used our unsupervised model to recover the latent variables. Because real-world scenarios may introduce highly complex factors that cannot be exactly modeled in our simulation, a more robust feature and metric are needed. For every audio clip, we use its temporal energy distribution, derived from the spectrogram, as the feature. A window of 2,000 samples with a 1,500-sample overlap is used to calculate the energy distribution. We then use the earth mover’s distance (EMD) [Rubner et al., 2000] as the metric, a natural choice for measuring distances between distributions. The inference result is illustrated in Figure 7. Using the energy distribution with the EMD measure, our generated sound aligns its energy at the major collision events with the real audio, which greatly reduces ambiguity among the three candidate shapes. We also plot our normalized likelihood over time to show that the sampling has converged to highly probable samples. 6 Conclusion In this paper, we propose a novel model for estimating physical properties of objects from auditory inputs, by incorporating the feedback of an efficient audio synthesis engine in the loop. We demonstrate the possibility of accelerating inference with fast recognition models. We compare our model predictions with human responses on a variety of judgment tasks and demonstrate the correlation between human responses and model estimates. We also show that our model generalizes to some real data. Acknowledgements The authors would like to thank Changxi Zheng, Eitan Grinspun, and Josh H. McDermott for helpful discussions. This work is supported by NSF #1212849 and #1447476, ONR MURI N00014-16-1-2007, Toyota Research Institute, Samsung, Shell, and the Center for Brains, Minds and Machines (NSF STC award CCF-1231216).
1. What is the main contribution of the paper regarding shape and material recognition from sounds? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its generative model, inference process, and use of physics simulation? 3. How does the reviewer assess the clarity and depth of the presentation, especially regarding the four variants of machine learning algorithms used? 4. What are the limitations of the paper regarding its experimental design and lack of real data usage? 5. How does the reviewer evaluate the significance and impact of the paper's findings, particularly in relation to its novelty and potential applications?
Review
Review This paper contains a lot of ideas and methods packed into a tightly wound package - less would certainly have been more. The general aim is to build a system that can mimic the human ability to recognise the shape and material of objects from their sounds. Certainly a nice idea to explore within the NIPS community. To this end a generative model is defined, and a process for inference is proposed ("Physics-Based Audio Engine") that uses a physics simulation; this physics simulation is coupled (somehow) to a sound generation algorithm, which appears to use (somehow) the physics engine's representation of the vibrating surfaces of the objects to render the sounds. This engine is then used to create training stimuli for 4 variants of machine learning algorithms to learn appropriate representations. Again the descriptions are rather curt, so it is difficult to understand how the algorithms actually operate and how they perform. Into the paper mix we also now have human experiments that aim to compare the algorithms with the human observer. A description of how the experiment was conducted with human users is missing; however, human users are almost always outperformed by the model. While it is up to this point unclear why the authors did not use real data when assessing their models, we are treated to real data at the end (Figure 7), but without any meaningful information for us to understand how their method performs. In my opinion, these are the major issues: 1. Too much information, and therefore too superficially presented. 2. It is unclear how good the generative model is; a comparison to real data would have helped here in anchoring it first. 3. The use of 4 different algorithms for classification is a nice effort, but it remains unclear to me what the point is (do the authors have a clear hypothesis?). 4. The human data is in itself interesting, but at this point it is unclear how it was obtained and how it informs us further on the work done in the paper.
NIPS
Title Shape and Material from Sound Abstract Hearing an object falling onto the ground, humans can recover rich information including its rough shape, material, and falling height. In this paper, we build machines to approximate such competency. We first mimic human knowledge of the physical world by building an efficient, physics-based simulation engine. Then, we present an analysis-by-synthesis approach to infer properties of the falling object. We further accelerate the process by learning a mapping from a sound wave to object properties, and using the predicted values to initialize the inference. This mapping can be viewed as an approximation of human commonsense learned from past experience. Our model performs well on both synthetic audio clips and real recordings without requiring any annotated data. We conduct behavior studies to compare human responses with ours on estimating object shape, material, and falling height from sound. Our model achieves near-human performance. 1 Introduction From a short audio clip of interacting objects, humans can recover the number of objects involved, as well as their materials and surface smoothness [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Siegel et al., 2014]. How does our cognitive system recover so much content from so little? What is the role of past experience in understanding auditory data? For physical scene understanding from visual input, recent behavioral and computational studies suggest that human judgments can be well explained as approximate, probabilistic simulations of a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013]. These studies suggest that the brain encodes rich, but noisy, knowledge of physical properties of objects and basic laws of physical interactions between objects. To understand and reason about a physical scene, and to predict what happens next, humans seem to rely on simulations from this mental physics engine. In this paper, we develop a computational system to interpret audio clips of falling objects, inspired by the idea that humans may use a physics engine as part of a generative model to understand the physical world. Our generative model has three components. The first is an object representation that includes the object's 3D shape, position in space, and physical properties such as mass, Young’s modulus, Rayleigh damping coefficients, and restitution. We aim to infer all these attributes from auditory inputs. The second component is an efficient, physics-based audio synthesis engine. Given an initial scene setup and object properties, the engine simulates the object’s motion and generates its trajectory using rigid-body physics. It also produces the corresponding collision profile: when, where, and how collisions happen. The object’s trajectory and collision profile are then combined with its pre-computed sound statistics to generate the sound it makes during the physical event. With this efficient forward model, we can then infer object properties using analysis-by-synthesis; for each audio clip, we want to find the set of latent variables that best reproduces it. The third component of the model is therefore a likelihood function that measures the perceptual distance between two sounds. Designing such a likelihood function is typically challenging; however, we observe that features like the spectrogram are effective when the latent variables have limited degrees of freedom.
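As a concrete illustration of such a feature-based distance, here is a minimal Python sketch; the window length, overlap, and FFT size are taken from the sampling setup reported later in the paper, while the function name and the magnitude-based comparison are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_distance(a: np.ndarray, b: np.ndarray, fs: int = 44100) -> float:
    """l2 distance between magnitude spectrograms of two waveforms.
    Window length, overlap, and FFT size follow the sampling setup
    reported later in the paper; everything else is an assumption."""
    _, _, Sa = spectrogram(a, fs=fs, window=("tukey", 0.25),
                           nperseg=5000, noverlap=2000, nfft=10000)
    _, _, Sb = spectrogram(b, fs=fs, window=("tukey", 0.25),
                           nperseg=5000, noverlap=2000, nfft=10000)
    n = min(Sa.shape[1], Sb.shape[1])  # crop to a common number of frames
    return float(np.linalg.norm(Sa[:, :n] - Sb[:, :n]))
```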
This motivates us to infer latent variables via methods like Gibbs sampling, where we focus on approximating the conditional probability of a single variable given the others. The inference procedure can be further accelerated with a self-supervised learning paradigm inspired by the wake/sleep phases in Helmholtz machines [Dayan et al., 1995]. We train a deep neural network as the recognition model to regress object properties from sound, where the training data are generated using our inference algorithm. Then, for any future audio clip, the output of the recognition model can be used as a good initialization for the sampling algorithm to converge faster. We evaluate our models on a range of perception tasks: inferring object shape, material, and initial height from sound. We also collect human responses for each task and compare them with model estimates. Our results indicate that, first, humans are quite successful in these tasks; second, our model not only closely matches human successes, but also makes errors similar to those humans make. For these quantitative evaluations, we have mostly used synthetic data, where ground-truth labels are available. We further evaluate the model on recordings to demonstrate that it also performs well on real-world audio. We make three contributions in this paper. First, we propose a novel model for estimating physical object properties from auditory inputs by incorporating the feedback of a physics engine and an audio engine into the inference process. Second, we incorporate a deep recognition network with the generative model for more efficient inference. Third, we evaluate our model and compare it to humans on a variety of judgment tasks, and demonstrate the correlation between human responses and model estimates. 2 Related Work Human visual and auditory perception Psychoacoustics researchers have explored how humans infer object properties, including shape, material, and size, from audio over the past decades [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Rocchesso and Fontana, 2003, Klatzky et al., 2000, Siegel et al., 2014]. Recently, McDermott et al. [2013] proposed compact sound representations that capture semantic information and are informative of human auditory perception. Sound simulation Our sound synthesis engine builds upon and extends existing sound simulation systems in computer graphics and computer vision [O’Brien et al., 2001, 2002, James et al., 2006, Bonneel et al., 2008, Van den Doel and Pai, 1998, Zhang et al., 2017]. Van den Doel and Pai [1998] simulated object vibration using the finite element method and approximated the vibrating object as a single point source. O’Brien et al. [2001, 2002] used the Rayleigh method to approximate wave equation solutions for better synthesis quality. James et al. [2006] proposed to solve Helmholtz equations using the Boundary Element Method, where each object’s vibration mode is approximated by a set of vibrating points. Recently, Zhang et al. [2017] built a framework for synthesizing large-scale audio-visual data. In this paper, we accelerate the framework of Zhang et al. [2017] to achieve near real-time rendering, and explore learning object representations from sound with the synthesis engine in the loop. Physical Object Perception There has been growing interest in understanding physical object properties, like mass and friction, from visual input or scene dynamics [Chang et al., 2017, Battaglia et al., 2016, Wu et al., 2015, 2016, 2017].
Much of the existing research has focused on inferring object properties from visual data. Recently, researchers have begun to explore learning object representations from sound. Owens et al. [2016a] attempted to infer material properties from audio, focusing on the scenario of hitting objects with a drumstick. Owens et al. [2016b] further demonstrated that audio signals can be used as supervision for learning object concepts from visual data, and Aytar et al. [2016] proposed to learn sound representations from corresponding video frames. Zhang et al. [2017] discussed the complementary role of auditory and visual data in recovering both geometric and physical object properties. In this paper, we learn physical object representations from audio through a combination of powerful deep recognition models and analysis-by-synthesis inference methods. Analysis-by-synthesis Our framework also relates to the field of analysis-by-synthesis, or generative models with data-driven proposals [Yuille and Kersten, 2006, Zhu and Mumford, 2007, Wu et al., 2015], as we incorporate a graphics engine as a black-box synthesizer. Unlike earlier methods that focus mostly on explaining visual data, our work aims to infer latent parameters from auditory data. Please refer to Bever and Poeppel [2010] for a review of analysis-by-synthesis methods. 3 An Efficient, Physics-Based Audio Engine At the core of our inference pipeline is an efficient audio synthesis engine. In this section, we first give a brief overview of existing synthesis engines, and then present our technical innovations for accelerating them to real-time rendering in our inference algorithm. 3.1 Audio Synthesis Engine Audio synthesis engines generate realistic sound by simulating physics. First, rigid-body simulation produces the interaction between an object and the environment, where Newton’s laws dictate the object’s motion and collisions over time. Each collision causes the object to vibrate in certain patterns, changing the air pressure around its surface. These vibrations propagate through the air to the recorder and create the sound of this physical process. Rigid Body Simulation Given an object’s 3D position and orientation, and its mass and restitution, a physics engine can simulate the physical processes and output the object’s position, orientation, and collision information over time. Our implementation uses an open-source physics engine, Bullet [Coumans, 2010]. We use a time step of 1/300 second to ensure simulation accuracy. At each time step, we record the 3D pose and position of the object, as well as the location, magnitude, and direction of collisions. The sound made by the object can then be approximated by accumulating the sounds caused by those discrete impulse collisions on its surface. Audio Synthesis The audio synthesis procedure is built upon previous work on simulating realistic sounds [James et al., 2006, Bonneel et al., 2008, O’Brien et al., 2001]. To facilitate fast synthesis, this process is decomposed into two modules, one offline and one online. The offline part first uses finite element methods (FEM) to obtain the object’s vibration modes, which depend on the shape and Young’s modulus of the object. These vibration modes are then used as Neumann boundary conditions of the Helmholtz equation, which can be solved using boundary element methods (BEM). We use the techniques proposed by James et al. [2006] to approximate the solution by modeling the pressure fields with a sparse set of vibrating points.
Note that the computation above depends only on the object’s intrinsic properties, such as shape and Young’s modulus, and not on extrinsic ones such as its position and velocity. This allows us to pre-compute a number of shape-modulus configurations before simulation; only minimal computation is needed during the online simulation. The online part of the audio engine loads the pre-computed approximations and decomposes impulses on the surface mesh of the object into its modal bases. At the observation point, the engine measures the pressure changes induced by vibrations in each mode, and sums them up to produce the simulated sound. An evaluation of the fidelity of these simulations can be found in Zhang et al. [2017]. 3.2 Accelerating Audio Synthesis Analysis-by-synthesis inference requires the audio engine to be highly efficient; however, a straightforward implementation of the above simulation procedure would be computationally expensive. We therefore present technical innovations that accelerate the computation to near real-time. First, we select the most significant modes excited by each impulse until their total energy reaches 90% of the energy of the impulse. Ignoring the sound components generated by the less significant modes reduces the computational time by about 50%. Second, we stop the synthesis process if the amplitude of the damped sound drops below a certain threshold, since it is unlikely to be heard. Third, we parallelize the synthesis process by tackling collisions separately, so that each can be computed on an independent thread. We then integrate them into a shared buffer to generate the final audio according to their timestamps. The effect of acceleration is shown in Table 1. Online sound synthesis only involves variables that are fully decoupled from the offline stage, which enables us to freely manipulate the other variables with little computational cost during simulation. 3.3 Generating Stimuli Because real audio recordings with rich labels are hard to acquire, we synthesize random audio clips using our physics-based simulation to evaluate our models. Specifically, we focus on a single scenario: shape primitives falling onto the ground. We first construct an audio dataset that includes 14 primitives (some shown in Table 2), each with 10 different specific moduli (defined as Young’s modulus over density). After pre-computing their shape-modulus configurations, we can generate synthetic audio clips in a near-real-time fashion. Because the process of an object falling onto the ground is relatively fast, we set the total simulation time of each scenario to 3 seconds. Details of our setup can be found in Table 2. 4 Inference In this section, we investigate four models for inferring object properties, each corresponding to a different training condition. Inspired by how humans can infer scene information using a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013], we start from an unsupervised model whose input is a single test case with no annotation. We adopt Gibbs sampling over the latent variables to find the combination that best reproduces the given audio. We then extend the model to include a deep neural network, analogous to what humans may learn from past experience. The network is trained using labels inferred by the unsupervised model. During inference, the sampling algorithm uses the network prediction as the initialization. This self-supervised learning paradigm speeds up convergence.
We also investigate a third case, where labels can be acquired but are extremely coarse. We first train a recognition model with weak labels, then randomly pick candidates consistent with those labels as an initialization for our analysis-by-synthesis inference. Lastly, to understand performance limits, we train a deep neural network with fully labeled data, which yields the upper-bound performance. 4.1 Models Unsupervised Given an audio clip S, we would like to recover the latent variables x that make the reproduced sound g(x) most similar to S. Let L(·, ·) be a likelihood function that measures the perceptual distance between two sounds; our goal is then to maximize L(g(x), S), which we denote p(x) for brevity. To find the x that maximizes p(x), p(x) can be treated as a distribution p̂(x) scaled by an unknown partition function Z. Since we do not have an exact form for p(·) or p̂(x), we apply Gibbs sampling to draw samples from p(x). Specifically, at sweep round t, we update each variable x_i by drawing samples from p̂(x_i | x_1^t, x_2^t, ..., x_{i−1}^t, x_{i+1}^{t−1}, ..., x_n^{t−1}). (1) Such conditional probabilities are straightforward to approximate. For example, to sample Young’s modulus conditioned on the other variables, we can use the spectrogram as a feature and measure the l2 distance between the spectrograms of two sounds, because Young’s modulus only affects the frequency at each collision. Indeed, we can use the spectrogram as the feature for all variables except height. Since the height can be inferred from the time of the first collision, a simple likelihood function can be designed as the time difference between the first impacts in the two sounds. Note that this is only an approximate measure: the object’s shape and orientation also affect, although only slightly, the time of first impact. To sample from the conditional probabilities, we adopt the Metropolis–Hastings algorithm, where samples are drawn from a Gaussian distribution and are accepted by flipping a biased coin according to each sample's likelihood compared with the previous one. Specifically, we calculate the l2 distance d_t in feature space between g(x_t) and S. For a new sample x_{t+1}, we also calculate the l2 distance d_{t+1} in feature space between g(x_{t+1}) and S. The new sample is accepted if d_{t+1} is smaller than d_t; otherwise, x_{t+1} is accepted with probability exp(−(d_{t+1} − d_t)/T), where T is a time-varying temperature inspired by the simulated annealing algorithm. In our implementation, T is set as a quadratic function of the current MCMC sweep number t (a code sketch of this acceptance rule follows below). Self-supervised Learning To accelerate the above sampling process, we propose a self-supervised model, analogous to a Helmholtz machine trained with the wake-sleep algorithm. We first train a deep neural network whose labels are generated by running the unsupervised inference model above for a limited number of iterations. For a new audio clip, our self-supervised model uses the result of the neural network as an initialization, and then runs our analysis-by-synthesis algorithm to refine the inference. By making use of the past experience distilled into the network, the sampling process starts from a better position and requires fewer iterations to converge than the unsupervised model.
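The annealed acceptance rule above can be sketched compactly. A minimal Python sketch follows, assuming a generic Gaussian proposal and feature distance; the base temperature T0 and the exact quadratic schedule are assumptions, as the paper does not give their coefficients:

```python
import numpy as np

def mh_step(x, d_prev, propose, distance, t, T0=1.0):
    """One Metropolis-Hastings update with the annealed acceptance rule
    described above. `propose` draws a Gaussian perturbation of x and
    `distance` returns the feature-space l2 distance to the target sound.
    T0 and the exact quadratic schedule are assumptions."""
    T = T0 / (1.0 + t) ** 2          # assumed decreasing quadratic schedule in sweep t
    x_new = propose(x)
    d_new = distance(x_new)
    if d_new < d_prev or np.random.rand() < np.exp(-(d_new - d_prev) / T):
        return x_new, d_new          # accept the proposal
    return x, d_prev                 # reject and keep the previous state
```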
Weakly-supervised Learning We further investigate the case where weak supervision might be helpful for accelerating the inference process. Since the latent variables we aim to recover are hard to obtain in real-world settings, it is more realistic to assume that we could acquire very coarse labels, such as the type of material, rough attributes of the object’s shape, the height of the fall, etc. Based on this assumption, we coarsen the ground-truth labels for all variables. For our primitive shapes, three attributes are defined, namely “with edge,” “with curved surface,” and “pointy.” Material parameters, i.e., specific modulus, Rayleigh damping coefficients, and restitution, are mapped to steel, ceramic, polystyrene, or wood by nearest-neighbor matching against the real material parameters. Height is divided into “low” and “high” categories. A deep convolutional neural network is trained on our synthesized dataset with these coarse labels. As shown in Figure 4, even when trained with coarse labels, our network learns features very similar to the ones learned by the fully supervised network. To go beyond the coarse labels, the unsupervised model is applied using the initialization suggested by the neural network. Fully-supervised Learning To explore the performance upper bound of the inference tasks, we train an oracle model with ground-truth labels. To visualize the abstraction and characteristic features learned by the oracle model, we plot the inputs that maximally activate some hidden units in the last layer of the network. Figure 4 illustrates some of the most interesting waveforms. A selection of them learned to recognize specific temporal patterns, while others were sensitive to specific frequencies. Similar patterns were found in the weakly and fully supervised models. 4.2 Contrasting Model Performance We evaluate how well our model performs under different settings, studying how past experience or coarse labeling can improve the unsupervised results. We first present the implementation details of all four models, then compare their results on all inference tasks. Sampling Setup We perform 80 sweeps of MCMC sampling over all 7 latent variables; in every sweep, each variable is sampled twice. Shape, specific modulus, and rotation are sampled from uniform distributions over their corresponding dimensions. For the other continuous variables, we define an auxiliary Gaussian variable x_i ∼ N(µ_i, σ_i²) for sampling, where the mean µ_i is based on the current state. To evaluate the likelihood function between the input and the sampled audio (both with a sample rate of 44.1 kHz), we compute the spectrogram of the signal using a Tukey window of length 5,000 with a 2,000-sample overlap. For each window, a 10,000-point Fourier transform is applied. Deep Learning Setup Our fully supervised and self-supervised recognition models use the SoundNet-8 architecture [Aytar et al., 2016] shown in Figure 3, which takes an arbitrarily long raw audio wave as input and produces a 1024-dim feature vector. We append to that a fully connected layer to produce a 28-dim vector as the final output of the neural network. The first 14 dimensions are the one-hot encoding of the primitive shapes, and the next 10 dimensions are encodings of the specific modulus. The last 4 dimensions regress the initial height, the two Rayleigh damping coefficients, and the restitution, respectively. All regression dimensions are normalized to a [−1, 1] range. The weakly supervised model preserves the structure of the fully supervised one, but with an 8-dim final output: 3 for shape attributes, 1 for height, and 4 for materials.
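To make the output layout concrete, here is a minimal PyTorch sketch of this 28-dim head (the original work used Torch7; the class and variable names here are hypothetical):

```python
import torch
import torch.nn as nn

class PropertyHead(nn.Module):
    """Fully connected layer appended to the 1024-dim SoundNet-8 feature,
    producing the 28-dim output vector described above."""
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 28)

    def forward(self, feat: torch.Tensor) -> dict:
        out = self.fc(feat)
        return {
            "shape": out[:, :14],      # one-hot encoding of the 14 primitives
            "modulus": out[:, 14:24],  # encoding of the 10 specific moduli
            "regression": out[:, 24:], # height, 2 Rayleigh coefficients, restitution in [-1, 1]
        }

# Hypothetical usage: `feat` would come from a SoundNet-8 backbone (not shown).
feat = torch.randn(16, 1024)
preds = PropertyHead()(feat)
```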
We used stochastic gradient descent for training, with a learning rate of 0.001, a momentum of 0.9, and a batch size of 16. Mean squared error (MSE) loss is used for backpropagation. We implemented our framework in Torch7 [Collobert et al., 2011] and trained all models from scratch. Results Results for the four inference models proposed above are shown in Table 3. For shape and specific modulus, we report classification accuracies; for height, the Rayleigh damping coefficients, and restitution, we report MSE. Before calculating MSE, we normalize the values of each latent variable to the [−1, 1] interval so that the MSE scores are comparable across variables. From Table 3, we can conclude that the self-supervised and weakly supervised models benefit from better initialization of the analysis-by-synthesis algorithm, especially on the last four continuous latent variables. One can also observe that their final inference accuracies and MSEs are marginally better than those of the unsupervised case. To illustrate the rate of convergence, Figure 5 plots the likelihood value exp(−kd), where d is the distance between sound features, over the MCMC iterations. The mean curve of the self-supervised model meets our expectation, i.e., it converges much faster than the unsupervised model and reaches a slightly higher likelihood at the end of 30 iterations. The fully supervised model, which is trained on 200,000 audio clips with the full set of ground-truth labels, yields near-perfect results for all latent variables. 5 Evaluations We first evaluate our inference procedure by comparing its performance with that of humans. This evaluation is conducted on synthetic audio with ground-truth labels. Then, we investigate whether our inference algorithm performs well on real-world recordings. Given recorded audio, our algorithm can distinguish the shape from a set of candidates. 5.1 Human Studies We seek to evaluate our model relative to human performance. We designed three tasks for our subjects: inferring the object’s shape, material, and height of fall from sound; these are intuitive attributes one perceives when hearing an object fall. The tasks are designed as classification problems, where the labels are in accordance with the coarse labels used by our weakly supervised model. The study was conducted on Amazon Mechanical Turk. For each experiment (shape, material, height), we randomly selected 52 test cases. Before answering the test questions, each subject was shown 4 training examples with ground truth to become familiar with the setup. We collected 192 responses for the experiment on inferring shape, 566 for material, and 492 for height, resulting in a total of 1,250 responses. Inferring Shapes After becoming familiar with the experiment, participants are asked to make three binary judgments about the shape by listening to our synthesized audio clip. Prior examples are given so that people understand the distinctions between the “with edge,” “with curved surface,” and “pointy” attributes. As shown in Figure 6, humans are relatively good at recognizing shape attributes from sound and are at about the same level of competency as the unsupervised algorithm run for 10 to 30 iterations. Inferring Materials We sampled audio clips whose physical properties (density, Young’s modulus, and damping coefficients) are in the vicinity of the true parameters of steel, ceramic, polystyrene, and wood. Participants are required to choose one out of four possible materials.
However, it can still be challenging to distinguish between materials, especially when the sampled clips have similar damping and specific modulus. Our algorithm occasionally confuses steel with ceramic and ceramic with polystyrene, which is in accordance with human performance, as shown in Figure 5. Inferring Heights In this task, we ask participants to choose whether the object is dropped from a high position or a low one. We provided example videos and audio clips to help participants anchor the reference heights. Under our scene setup, the touchdown times of the two extremes of the height range differ by 0.2 s. To address the potential bias that algorithms may be better at exploiting falling time, we took several precautions. First, we explicitly told participants that the silence at the beginning is informative. Second, we made sure that the anchoring example remained available during the test, so participants could always compare against and refer to it. Third, each participant had to play each test clip manually, and therefore had control over when the audio began. Last, we tested on different object shapes. Because the time of first impact is shape-dependent, differently shaped objects dropped from the same height have first impacts at different times, making it harder for the machine to exploit this cue. 5.2 Transferring to Real Scenes In addition to the synthetic data, we designed real-world experiments to test our unsupervised model. We selected three candidate shapes: tetrahedron, octahedron, and dodecahedron. We recorded the sound of a metal octahedron dropping onto a table and used our unsupervised model to recover the latent variables. Because real-world scenarios may introduce highly complex factors that cannot be exactly modeled in our simulation, a more robust feature and metric are needed. For every audio clip, we use its temporal energy distribution, derived from the spectrogram, as the feature. A window of 2,000 samples with a 1,500-sample overlap is used to calculate the energy distribution. We then use the earth mover’s distance (EMD) [Rubner et al., 2000] as the metric, a natural choice for measuring distances between distributions. The inference result is illustrated in Figure 7. Using the energy distribution with the EMD measure, our generated sound aligns its energy at the major collision events with the real audio, which greatly reduces ambiguity among the three candidate shapes. We also plot our normalized likelihood over time to show that the sampling has converged to highly probable samples. 6 Conclusion In this paper, we propose a novel model for estimating physical properties of objects from auditory inputs, by incorporating the feedback of an efficient audio synthesis engine in the loop. We demonstrate the possibility of accelerating inference with fast recognition models. We compare our model predictions with human responses on a variety of judgment tasks and demonstrate the correlation between human responses and model estimates. We also show that our model generalizes to some real data. Acknowledgements The authors would like to thank Changxi Zheng, Eitan Grinspun, and Josh H. McDermott for helpful discussions. This work is supported by NSF #1212849 and #1447476, ONR MURI N00014-16-1-2007, Toyota Research Institute, Samsung, Shell, and the Center for Brains, Minds and Machines (NSF STC award CCF-1231216).
1. What is the main contribution of the paper in terms of the proposed system? 2. What are the strengths of the paper regarding its documentation and experimental setup? 3. Do you have any concerns about the lack of significant machine learning contributions? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any questions or concerns regarding the comparison with previous systems or literature?
Review
Review This paper presents a system to infer the shape and material of falling objects from sound. The main system follows the generative approach: its pipeline consists of a physics engine for simulating rigid-body dynamics (given latent variables that characterize various features of the object) and an audio synthesis system. Four different inference models are explored in the paper, ranging from an unsupervised scheme, in which a Gibbs sampler is used to infer the latent variables that give rise to a given test audio clip, to a fully supervised model, in which an oracle model is trained on ground-truth data obtained from the generative model. Finally, a study was conducted to compare performance on object recognition between humans and the inference model, showing that the latter performs comparably or better. PROS: - The first pages are relatively well written. Both the architecture and the experimental setups are well documented. - The system reaches (super)human-level performance in shape and material recognition from (synthetic) sounds. - The relation to previous literature is well documented. CONS: - Although the paper tackles a hard task using a sophisticated system, there is little to no significant ML contribution. - Many typos. The paper gets quite confusing towards the last pages and feels as if it was written in a rush. - There is no comparison to previous systems.
NIPS
Title Shape and Material from Sound Abstract Hearing an object falling onto the ground, humans can recover rich information including its rough shape, material, and falling height. In this paper, we build machines to approximate such competency. We first mimic human knowledge of the physical world by building an efficient, physics-based simulation engine. Then, we present an analysis-by-synthesis approach to infer properties of the falling object. We further accelerate the process by learning a mapping from a sound wave to object properties, and using the predicted values to initialize the inference. This mapping can be viewed as an approximation of human commonsense learned from past experience. Our model performs well on both synthetic audio clips and real recordings without requiring any annotated data. We conduct behavior studies to compare human responses with ours on estimating object shape, material, and falling height from sound. Our model achieves near-human performance. 1 Introduction From a short audio clip of interacting objects, humans can recover the number of objects involved, as well as their materials and surface smoothness [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Siegel et al., 2014]. How does our cognitive system recover so much content from so little? What is the role of past experience in understanding auditory data? For physical scene understanding from visual input, recent behavioral and computational studies suggest that human judgments can be well explained as approximate, probabilistic simulations of a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013]. These studies suggest that the brain encodes rich, but noisy, knowledge of physical properties of objects and basic laws of physical interactions between objects. To understand and reason about a physical scene, and to predict what happens next, humans seem to rely on simulations from this mental physics engine. In this paper, we develop a computational system to interpret audio clips of falling objects, inspired by the idea that humans may use a physics engine as part of a generative model to understand the physical world. Our generative model has three components. The first is an object representation that includes the object's 3D shape, position in space, and physical properties such as mass, Young’s modulus, Rayleigh damping coefficients, and restitution. We aim to infer all these attributes from auditory inputs. The second component is an efficient, physics-based audio synthesis engine. Given an initial scene setup and object properties, the engine simulates the object’s motion and generates its trajectory using rigid-body physics. It also produces the corresponding collision profile: when, where, and how collisions happen. The object’s trajectory and collision profile are then combined with its pre-computed sound statistics to generate the sound it makes during the physical event. With this efficient forward model, we can then infer object properties using analysis-by-synthesis; for each audio clip, we want to find the set of latent variables that best reproduces it. The third component of the model is therefore a likelihood function that measures the perceptual distance between two sounds. Designing such a likelihood function is typically challenging; however, we observe that features like the spectrogram are effective when the latent variables have limited degrees of freedom.
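As a concrete illustration of such a feature-based distance, here is a minimal Python sketch; the window length, overlap, and FFT size are taken from the sampling setup reported later in the paper, while the function name and the magnitude-based comparison are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_distance(a: np.ndarray, b: np.ndarray, fs: int = 44100) -> float:
    """l2 distance between magnitude spectrograms of two waveforms.
    Window length, overlap, and FFT size follow the sampling setup
    reported later in the paper; everything else is an assumption."""
    _, _, Sa = spectrogram(a, fs=fs, window=("tukey", 0.25),
                           nperseg=5000, noverlap=2000, nfft=10000)
    _, _, Sb = spectrogram(b, fs=fs, window=("tukey", 0.25),
                           nperseg=5000, noverlap=2000, nfft=10000)
    n = min(Sa.shape[1], Sb.shape[1])  # crop to a common number of frames
    return float(np.linalg.norm(Sa[:, :n] - Sb[:, :n]))
```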
This motivates us to infer latent variables via methods like Gibbs sampling, where we focus on approximating the conditional probability of a single variable given the others. The inference procedure can be further accelerated with a self-supervised learning paradigm inspired by the wake/sleep phases in Helmholtz machines [Dayan et al., 1995]. We train a deep neural network as the recognition model to regress object properties from sound, where the training data are generated using our inference algorithm. Then, for any future audio clip, the output of the recognition model can be used as a good initialization for the sampling algorithm to converge faster. We evaluate our models on a range of perception tasks: inferring object shape, material, and initial height from sound. We also collect human responses for each task and compare them with model estimates. Our results indicate that, first, humans are quite successful in these tasks; second, our model not only closely matches human successes, but also makes errors similar to those humans make. For these quantitative evaluations, we have mostly used synthetic data, where ground-truth labels are available. We further evaluate the model on recordings to demonstrate that it also performs well on real-world audio. We make three contributions in this paper. First, we propose a novel model for estimating physical object properties from auditory inputs by incorporating the feedback of a physics engine and an audio engine into the inference process. Second, we incorporate a deep recognition network with the generative model for more efficient inference. Third, we evaluate our model and compare it to humans on a variety of judgment tasks, and demonstrate the correlation between human responses and model estimates. 2 Related Work Human visual and auditory perception Psychoacoustics researchers have explored how humans infer object properties, including shape, material, and size, from audio over the past decades [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Rocchesso and Fontana, 2003, Klatzky et al., 2000, Siegel et al., 2014]. Recently, McDermott et al. [2013] proposed compact sound representations that capture semantic information and are informative of human auditory perception. Sound simulation Our sound synthesis engine builds upon and extends existing sound simulation systems in computer graphics and computer vision [O’Brien et al., 2001, 2002, James et al., 2006, Bonneel et al., 2008, Van den Doel and Pai, 1998, Zhang et al., 2017]. Van den Doel and Pai [1998] simulated object vibration using the finite element method and approximated the vibrating object as a single point source. O’Brien et al. [2001, 2002] used the Rayleigh method to approximate wave equation solutions for better synthesis quality. James et al. [2006] proposed to solve Helmholtz equations using the Boundary Element Method, where each object’s vibration mode is approximated by a set of vibrating points. Recently, Zhang et al. [2017] built a framework for synthesizing large-scale audio-visual data. In this paper, we accelerate the framework of Zhang et al. [2017] to achieve near real-time rendering, and explore learning object representations from sound with the synthesis engine in the loop. Physical Object Perception There has been growing interest in understanding physical object properties, like mass and friction, from visual input or scene dynamics [Chang et al., 2017, Battaglia et al., 2016, Wu et al., 2015, 2016, 2017].
Much of the existing research has focused on inferring object properties from visual data. Recently, researchers have begun to explore learning object representations from sound. Owens et al. [2016a] attempted to infer material properties from audio, focusing on the scenario of hitting objects with a drumstick. Owens et al. [2016b] further demonstrated that audio signals can be used as supervision for learning object concepts from visual data, and Aytar et al. [2016] proposed to learn sound representations from corresponding video frames. Zhang et al. [2017] discussed the complementary role of auditory and visual data in recovering both geometric and physical object properties. In this paper, we learn physical object representations from audio through a combination of powerful deep recognition models and analysis-by-synthesis inference methods. Analysis-by-synthesis Our framework also relates to the field of analysis-by-synthesis, or generative models with data-driven proposals [Yuille and Kersten, 2006, Zhu and Mumford, 2007, Wu et al., 2015], as we incorporate a graphics engine as a black-box synthesizer. Unlike earlier methods that focus mostly on explaining visual data, our work aims to infer latent parameters from auditory data. Please refer to Bever and Poeppel [2010] for a review of analysis-by-synthesis methods. 3 An Efficient, Physics-Based Audio Engine At the core of our inference pipeline is an efficient audio synthesis engine. In this section, we first give a brief overview of existing synthesis engines, and then present our technical innovations for accelerating them to real-time rendering in our inference algorithm. 3.1 Audio Synthesis Engine Audio synthesis engines generate realistic sound by simulating physics. First, rigid-body simulation produces the interaction between an object and the environment, where Newton’s laws dictate the object’s motion and collisions over time. Each collision causes the object to vibrate in certain patterns, changing the air pressure around its surface. These vibrations propagate through the air to the recorder and create the sound of this physical process. Rigid Body Simulation Given an object’s 3D position and orientation, and its mass and restitution, a physics engine can simulate the physical processes and output the object’s position, orientation, and collision information over time. Our implementation uses an open-source physics engine, Bullet [Coumans, 2010]. We use a time step of 1/300 second to ensure simulation accuracy. At each time step, we record the 3D pose and position of the object, as well as the location, magnitude, and direction of collisions. The sound made by the object can then be approximated by accumulating the sounds caused by those discrete impulse collisions on its surface. Audio Synthesis The audio synthesis procedure is built upon previous work on simulating realistic sounds [James et al., 2006, Bonneel et al., 2008, O’Brien et al., 2001]. To facilitate fast synthesis, this process is decomposed into two modules, one offline and one online. The offline part first uses finite element methods (FEM) to obtain the object’s vibration modes, which depend on the shape and Young’s modulus of the object. These vibration modes are then used as Neumann boundary conditions of the Helmholtz equation, which can be solved using boundary element methods (BEM). We use the techniques proposed by James et al. [2006] to approximate the solution by modeling the pressure fields with a sparse set of vibrating points.
Note that the computation above depends only on the object’s intrinsic properties, such as shape and Young’s modulus, and not on extrinsic ones such as its position and velocity. This allows us to pre-compute a number of shape-modulus configurations before simulation; only minimal computation is needed during the online simulation. The online part of the audio engine loads the pre-computed approximations and decomposes impulses on the surface mesh of the object into its modal bases. At the observation point, the engine measures the pressure changes induced by vibrations in each mode, and sums them up to produce the simulated sound. An evaluation of the fidelity of these simulations can be found in Zhang et al. [2017]. 3.2 Accelerating Audio Synthesis Analysis-by-synthesis inference requires the audio engine to be highly efficient; however, a straightforward implementation of the above simulation procedure would be computationally expensive. We therefore present technical innovations that accelerate the computation to near real-time. First, we select the most significant modes excited by each impulse until their total energy reaches 90% of the energy of the impulse. Ignoring the sound components generated by the less significant modes reduces the computational time by about 50%. Second, we stop the synthesis process if the amplitude of the damped sound drops below a certain threshold, since it is unlikely to be heard. Third, we parallelize the synthesis process by tackling collisions separately, so that each can be computed on an independent thread. We then integrate them into a shared buffer to generate the final audio according to their timestamps. The effect of acceleration is shown in Table 1. Online sound synthesis only involves variables that are fully decoupled from the offline stage, which enables us to freely manipulate the other variables with little computational cost during simulation. 3.3 Generating Stimuli Because real audio recordings with rich labels are hard to acquire, we synthesize random audio clips using our physics-based simulation to evaluate our models. Specifically, we focus on a single scenario: shape primitives falling onto the ground. We first construct an audio dataset that includes 14 primitives (some shown in Table 2), each with 10 different specific moduli (defined as Young’s modulus over density). After pre-computing their shape-modulus configurations, we can generate synthetic audio clips in a near-real-time fashion. Because the process of an object falling onto the ground is relatively fast, we set the total simulation time of each scenario to 3 seconds. Details of our setup can be found in Table 2. 4 Inference In this section, we investigate four models for inferring object properties, each corresponding to a different training condition. Inspired by how humans can infer scene information using a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013], we start from an unsupervised model whose input is a single test case with no annotation. We adopt Gibbs sampling over the latent variables to find the combination that best reproduces the given audio. We then extend the model to include a deep neural network, analogous to what humans may learn from past experience. The network is trained using labels inferred by the unsupervised model. During inference, the sampling algorithm uses the network prediction as the initialization. This self-supervised learning paradigm speeds up convergence.
We also investigate a third case, where labels can be acquired but are extremely coarse. We first train a recognition model with weak labels, then randomly pick candidates consistent with those labels as an initialization for our analysis-by-synthesis inference. Lastly, to understand performance limits, we train a deep neural network with fully labeled data, which yields the upper-bound performance. 4.1 Models Unsupervised Given an audio clip S, we would like to recover the latent variables x that make the reproduced sound g(x) most similar to S. Let L(·, ·) be a likelihood function that measures the perceptual distance between two sounds; our goal is then to maximize L(g(x), S), which we denote p(x) for brevity. To find the x that maximizes p(x), p(x) can be treated as a distribution p̂(x) scaled by an unknown partition function Z. Since we do not have an exact form for p(·) or p̂(x), we apply Gibbs sampling to draw samples from p(x). Specifically, at sweep round t, we update each variable x_i by drawing samples from p̂(x_i | x_1^t, x_2^t, ..., x_{i−1}^t, x_{i+1}^{t−1}, ..., x_n^{t−1}). (1) Such conditional probabilities are straightforward to approximate. For example, to sample Young’s modulus conditioned on the other variables, we can use the spectrogram as a feature and measure the l2 distance between the spectrograms of two sounds, because Young’s modulus only affects the frequency at each collision. Indeed, we can use the spectrogram as the feature for all variables except height. Since the height can be inferred from the time of the first collision, a simple likelihood function can be designed as the time difference between the first impacts in the two sounds. Note that this is only an approximate measure: the object’s shape and orientation also affect, although only slightly, the time of first impact. To sample from the conditional probabilities, we adopt the Metropolis–Hastings algorithm, where samples are drawn from a Gaussian distribution and are accepted by flipping a biased coin according to each sample's likelihood compared with the previous one. Specifically, we calculate the l2 distance d_t in feature space between g(x_t) and S. For a new sample x_{t+1}, we also calculate the l2 distance d_{t+1} in feature space between g(x_{t+1}) and S. The new sample is accepted if d_{t+1} is smaller than d_t; otherwise, x_{t+1} is accepted with probability exp(−(d_{t+1} − d_t)/T), where T is a time-varying temperature inspired by the simulated annealing algorithm. In our implementation, T is set as a quadratic function of the current MCMC sweep number t (a code sketch of this acceptance rule follows below). Self-supervised Learning To accelerate the above sampling process, we propose a self-supervised model, analogous to a Helmholtz machine trained with the wake-sleep algorithm. We first train a deep neural network whose labels are generated by running the unsupervised inference model above for a limited number of iterations. For a new audio clip, our self-supervised model uses the result of the neural network as an initialization, and then runs our analysis-by-synthesis algorithm to refine the inference. By making use of the past experience distilled into the network, the sampling process starts from a better position and requires fewer iterations to converge than the unsupervised model.
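The annealed acceptance rule above can be sketched compactly. A minimal Python sketch follows, assuming a generic Gaussian proposal and feature distance; the base temperature T0 and the exact quadratic schedule are assumptions, as the paper does not give their coefficients:

```python
import numpy as np

def mh_step(x, d_prev, propose, distance, t, T0=1.0):
    """One Metropolis-Hastings update with the annealed acceptance rule
    described above. `propose` draws a Gaussian perturbation of x and
    `distance` returns the feature-space l2 distance to the target sound.
    T0 and the exact quadratic schedule are assumptions."""
    T = T0 / (1.0 + t) ** 2          # assumed decreasing quadratic schedule in sweep t
    x_new = propose(x)
    d_new = distance(x_new)
    if d_new < d_prev or np.random.rand() < np.exp(-(d_new - d_prev) / T):
        return x_new, d_new          # accept the proposal
    return x, d_prev                 # reject and keep the previous state
```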
Weakly-supervised Learning We further investigate the case where weak supervision might be helpful for accelerating the inference process. Since the latent variables we aim to recover are hard to obtain in real-world settings, it is more realistic to assume that we could acquire very coarse labels, such as the type of material, rough attributes of the object’s shape, the height of the fall, etc. Based on this assumption, we coarsen the ground-truth labels for all variables. For our primitive shapes, three attributes are defined, namely “with edge,” “with curved surface,” and “pointy.” Material parameters, i.e., specific modulus, Rayleigh damping coefficients, and restitution, are mapped to steel, ceramic, polystyrene, or wood by nearest-neighbor matching against the real material parameters. Height is divided into “low” and “high” categories. A deep convolutional neural network is trained on our synthesized dataset with these coarse labels. As shown in Figure 4, even when trained with coarse labels, our network learns features very similar to the ones learned by the fully supervised network. To go beyond the coarse labels, the unsupervised model is applied using the initialization suggested by the neural network. Fully-supervised Learning To explore the performance upper bound of the inference tasks, we train an oracle model with ground-truth labels. To visualize the abstraction and characteristic features learned by the oracle model, we plot the inputs that maximally activate some hidden units in the last layer of the network. Figure 4 illustrates some of the most interesting waveforms. A selection of them learned to recognize specific temporal patterns, while others were sensitive to specific frequencies. Similar patterns were found in the weakly and fully supervised models. 4.2 Contrasting Model Performance We evaluate how well our model performs under different settings, studying how past experience or coarse labeling can improve the unsupervised results. We first present the implementation details of all four models, then compare their results on all inference tasks. Sampling Setup We perform 80 sweeps of MCMC sampling over all 7 latent variables; in every sweep, each variable is sampled twice. Shape, specific modulus, and rotation are sampled from uniform distributions over their corresponding dimensions. For the other continuous variables, we define an auxiliary Gaussian variable x_i ∼ N(µ_i, σ_i²) for sampling, where the mean µ_i is based on the current state. To evaluate the likelihood function between the input and the sampled audio (both with a sample rate of 44.1 kHz), we compute the spectrogram of the signal using a Tukey window of length 5,000 with a 2,000-sample overlap. For each window, a 10,000-point Fourier transform is applied. Deep Learning Setup Our fully supervised and self-supervised recognition models use the SoundNet-8 architecture [Aytar et al., 2016] shown in Figure 3, which takes an arbitrarily long raw audio wave as input and produces a 1024-dim feature vector. We append to that a fully connected layer to produce a 28-dim vector as the final output of the neural network. The first 14 dimensions are the one-hot encoding of the primitive shapes, and the next 10 dimensions are encodings of the specific modulus. The last 4 dimensions regress the initial height, the two Rayleigh damping coefficients, and the restitution, respectively. All regression dimensions are normalized to a [−1, 1] range. The weakly supervised model preserves the structure of the fully supervised one, but with an 8-dim final output: 3 for shape attributes, 1 for height, and 4 for materials.
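To make the output layout concrete, here is a minimal PyTorch sketch of this 28-dim head (the original work used Torch7; the class and variable names here are hypothetical):

```python
import torch
import torch.nn as nn

class PropertyHead(nn.Module):
    """Fully connected layer appended to the 1024-dim SoundNet-8 feature,
    producing the 28-dim output vector described above."""
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 28)

    def forward(self, feat: torch.Tensor) -> dict:
        out = self.fc(feat)
        return {
            "shape": out[:, :14],      # one-hot encoding of the 14 primitives
            "modulus": out[:, 14:24],  # encoding of the 10 specific moduli
            "regression": out[:, 24:], # height, 2 Rayleigh coefficients, restitution in [-1, 1]
        }

# Hypothetical usage: `feat` would come from a SoundNet-8 backbone (not shown).
feat = torch.randn(16, 1024)
preds = PropertyHead()(feat)
```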
We used stochastic gradient descent for training, with a learning rate of 0.001, a momentum of 0.9, and a batch size of 16. Mean squared error (MSE) loss is used for backpropagation. We implemented our framework in Torch7 [Collobert et al., 2011] and trained all models from scratch. Results Results for the four inference models proposed above are shown in Table 3. For shape and specific modulus, we report classification accuracies; for height, the Rayleigh damping coefficients, and restitution, we report MSE. Before calculating MSE, we normalize the values of each latent variable to the [−1, 1] interval so that the MSE scores are comparable across variables. From Table 3, we can conclude that the self-supervised and weakly supervised models benefit from better initialization of the analysis-by-synthesis algorithm, especially on the last four continuous latent variables. One can also observe that their final inference accuracies and MSEs are marginally better than those of the unsupervised case. To illustrate the rate of convergence, Figure 5 plots the likelihood value exp(−kd), where d is the distance between sound features, over the MCMC iterations. The mean curve of the self-supervised model meets our expectation, i.e., it converges much faster than the unsupervised model and reaches a slightly higher likelihood at the end of 30 iterations. The fully supervised model, which is trained on 200,000 audio clips with the full set of ground-truth labels, yields near-perfect results for all latent variables. 5 Evaluations We first evaluate our inference procedure by comparing its performance with that of humans. This evaluation is conducted on synthetic audio with ground-truth labels. Then, we investigate whether our inference algorithm performs well on real-world recordings. Given recorded audio, our algorithm can distinguish the shape from a set of candidates. 5.1 Human Studies We seek to evaluate our model relative to human performance. We designed three tasks for our subjects: inferring the object’s shape, material, and height of fall from sound; these are intuitive attributes one perceives when hearing an object fall. The tasks are designed as classification problems, where the labels are in accordance with the coarse labels used by our weakly supervised model. The study was conducted on Amazon Mechanical Turk. For each experiment (shape, material, height), we randomly selected 52 test cases. Before answering the test questions, each subject was shown 4 training examples with ground truth to become familiar with the setup. We collected 192 responses for the experiment on inferring shape, 566 for material, and 492 for height, resulting in a total of 1,250 responses. Inferring Shapes After becoming familiar with the experiment, participants are asked to make three binary judgments about the shape by listening to our synthesized audio clip. Prior examples are given so that people understand the distinctions between the “with edge,” “with curved surface,” and “pointy” attributes. As shown in Figure 6, humans are relatively good at recognizing shape attributes from sound and are at about the same level of competency as the unsupervised algorithm run for 10 to 30 iterations. Inferring Materials We sampled audio clips whose physical properties (density, Young’s modulus, and damping coefficients) are in the vicinity of the true parameters of steel, ceramic, polystyrene, and wood. Participants are required to choose one out of four possible materials.
However, it can still be challenging to distinguish between materials, especially when the sampled ones have similar damping and specific modulus. Our algorithm occasionally confuses steel with ceramic and ceramic with polystyrene, which is in accordance with human performance, as shown in Figure 5. Inferring Heights In this task, we ask participants to choose whether the object is dropped from a high position or a low one. We provided example videos and audios to help people anchor the reference height. Under our scene setup, the touchdown times of the two extremes of the height range differ by 0.2s. To address the potential bias that algorithms may be better at exploiting falling time, first, we explicitly told humans that the silence at the beginning is informative. Second, we make sure that the anchoring example is always available during the test, so that participants can always compare and refer to it. Third, the participant has to play each test clip manually, and therefore has control over when the audio begins. Last, we tested on different object shapes. Because the time of first impact is shape-dependent, differently shaped objects dropped from the same height have first impacts at different times, making it harder for the machine to exploit this cue. 5.2 Transferring to Real Scenes In addition to the synthetic data, we designed real-world experiments to test our unsupervised model. We select three candidate shapes: tetrahedron, octahedron, and dodecahedron. We recorded the sound of a metal octahedron dropping on a table and used our unsupervised model to recover the latent variables. Because real-world scenarios may introduce highly complex factors that cannot be exactly modeled in our simulation, a more robust feature and metric are needed. For every audio clip, we use its temporal energy distribution as the feature, derived from its spectrogram. A window of 2,000 samples with a 1,500-sample overlap is used to calculate the energy distribution. Then, we use the earth mover's distance (EMD) [Rubner et al., 2000] as the metric, which is a natural choice for measuring distances between distributions. The inference result is illustrated in Figure 7. Using the energy distribution with the EMD metric, our generated sound aligns its energy at major collision events with the real audio, which greatly reduces ambiguities among the three candidate shapes. We also provide the normalized likelihood function over time to show that our sampling has converged to highly probable samples. 6 Conclusion In this paper, we propose a novel model for estimating physical properties of objects from auditory inputs, by incorporating the feedback of an efficient audio synthesis engine in the loop. We demonstrate the possibility of accelerating inference with fast recognition models. We compare our model predictions with human responses on a variety of judgment tasks and demonstrate the correlation between human responses and model estimates. We also show that our model generalizes to some real data. Acknowledgements The authors would like to thank Changxi Zheng, Eitan Grinspun, and Josh H. McDermott for helpful discussions. This work is supported by NSF #1212849 and #1447476, ONR MURI N00014-16-12007, Toyota Research Institute, Samsung, Shell, and the Center for Brain, Minds and Machines (NSF STC award CCF-1231216).
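As a concrete companion to Section 5.2, here is a minimal sketch of the temporal-energy feature and the EMD comparison (assuming NumPy/SciPy; the window and overlap sizes are those given above, and the audio arrays are placeholders):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def energy_distribution(audio, win=2000, overlap=1500):
    """Normalized temporal energy distribution of an audio clip."""
    hop = win - overlap
    starts = np.arange(0, len(audio) - win + 1, hop)
    energy = np.array([np.sum(audio[s:s + win] ** 2) for s in starts])
    return energy / energy.sum()

real = np.random.randn(44100)    # placeholder for the recorded clip (1 s at 44.1 kHz)
synth = np.random.randn(44100)   # placeholder for a synthesized candidate
p, q = energy_distribution(real), energy_distribution(synth)
bins = np.arange(len(p))
d = wasserstein_distance(bins, bins, p, q)   # 1-D EMD between the two distributions
print(f"EMD between energy distributions: {d:.3f}")
```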
1. What is the focus of the paper regarding object properties and bouncing surfaces?
2. What are the strengths of the proposed approach, particularly in its simplicity and performance?
3. What are the weaknesses of the paper, especially regarding the comparison methods and experimental designs?
4. How does the reviewer assess the clarity and quality of the paper's content?
Review
This paper describes an analysis-by-synthesis system for identifying properties of objects bouncing on a surface, including material, shape, and height of fall. A thorough investigation is conducted on synthetic data using various constraints on the training of the model, showing that it is accurately able to recover these basic facts from new recordings. An experiment on real data comparing human listeners with the system shows that it is comparable and may outperform the humans in the height task. The system uses a surprisingly simple comparison method between the observed and synthesized signals, but it appears to work well. The literature review is short, but quite good. The paper has several minor weaknesses:
* It is not clear if the ability of the model to detect fall height is because of the absolute timing of the simulations. Falling from a greater height leads to a longer delay before the first impact. This is obvious to an algorithm analyzing fixed-sized wav files, but not to a human listening to sound files with somewhat unknown silent beginnings. A fairer comparison would be to add a random amount of delay before starting the sounds for both listeners.
* The comparison method is changed between the synthetic and real tasks, which seems unfair. If it is necessary to use a more complex comparison method for the real task, then also use it for the synthetic one.
* Line 226 reports several analysis parameters in samples, but never states the sample rate. Please describe these quantities in seconds or ms, or provide the sample rate so the reader can perform the conversion themselves.
Overall, this is a strong paper that has gotten a relatively old and appealing idea to work much better than in the past.
NIPS
Title Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms Abstract The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm's output and the training sample. In this work, we study the proposal by Steinke and Zakynthinou (2020) to reason about the generalization error of a learning algorithm by introducing a super sample that contains the training sample as a random subset and computing mutual information conditional on the super sample. We first show that these new bounds based on the conditional mutual information are tighter than those based on the unconditional mutual information. We then introduce yet tighter bounds, building on the "individual sample" idea of Bu et al. (2019) and the "data dependent" ideas of Negrea et al. (2019), using disintegrated mutual information. Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the super sample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests. 1 Introduction Let D be an unknown distribution on a space Z, let W be a set of parameters that index a set of predictors, and let ℓ : Z × W → [0,1] be a bounded loss function. Consider a (randomized) learning algorithm A that selects an element W in W, based on an IID sample S = (Z_1, ..., Z_n) ∼ D^⊗n. For w ∈ W, let R_D(w) = E ℓ(Z,w) denote the risk of predictor w, and let

  R̂_S(w) = (1/n) ∑_{i=1}^n ℓ(Z_i, w)

denote the empirical risk. Our interest in this paper is the (expected) generalization error of A with respect to D, EGE_D(A) = E[R_D(W) − R̂_S(W)]. In this work, we study bounds on generalization error in terms of information-theoretic measures of dependence between the data and the output of the learning algorithm. This approach was initiated by Russo and Zou [18, 19] and has since been extended [2, 3, 6, 9, 17, 24]. The basic result in this line of work is that the generalization error can be bounded in terms of the mutual information I(W;S) between the data and the learned parameter, a quantity that has been called the information usage or input–output mutual information of A with respect to D, which we denote by IOMI_D(A). The following result is due to Russo and Zou [18] and Xu and Raginsky [24]. Theorem 1.1.

  EGE_D(A) ≤ √( IOMI_D(A) / (2n) ).

Theorem 1.1 formalizes the intuition that a learning algorithm without heavy dependence on the training set will generalize well. This result has been extended in many directions: Raginsky et al. [17] connect variants of IOMI_D(A) to different notions of stability. Asadi et al. [3] establish refined bounds using chaining techniques for subgaussian processes. Bu et al. [6] obtain a tighter bound by replacing IOMI_D(A) with the mutual information between W and a single training data point. Negrea et al. [15] propose variants that allow for data-dependent estimates. See also [1, 2, 4, 9, 10]. Our focus in this paper is on a new class of information-theoretic bounds on generalization error, proposed by Steinke and Zakynthinou [21]. Fix k ≥ 2, let [k] = {1, ..., k}, let U^(k) = (U_1, ..., U_n) ∼ Unif([k]^n), and let Z̃^(k) ∼ D^⊗(k×n) be a k×n array of IID random elements in Z, independent from U^(k). Let S = (Z̃_{U_1,1}, ..., Z̃_{U_n,n})
and let W be a random element in W such that, conditional on S, U^(k), and Z̃^(k), W has distribution A(S). It follows that, conditional on S, W is independent from U^(k) and Z̃^(k). By construction, the data set S is hidden inside the super sample; the indices U^(k) specify where. Steinke and Zakynthinou [21] use these additional structures to define: Definition 1.2. The conditional mutual information of A w.r.t. D is CMI^k_D(A) = I(W; U^(k) | Z̃^(k)). Intuitively, CMI^k_D(A) captures how well we can recognize which samples from the given super sample Z̃^(k) were in the training set, given the learned parameters. This intuition and the connection of CMI^k_D(A) with the membership attack [20] can be formalized using Fano's inequality, showing that CMI^k_D(A) can be used to lower bound the error of any estimator of U^(k) given W and Z̃^(k). (See Appendix A.) Steinke and Zakynthinou [21] connect CMI^k_D(A) with well-known notions in learning theory such as distributional stability, differential privacy, and VC dimension, and establish the following bound [21, Thm. 5.1] in the case k = 2, the extension to k ≥ 2 being straightforward: Theorem 1.3.

  EGE_D(A) ≤ √( 2 CMI^k_D(A) / n ).

This paper improves our understanding of the framework introduced by Steinke and Zakynthinou [21], identifies tighter bounds, and applies these techniques to the analysis of a real algorithm. In Section 2, we present several formal connections between the two aforementioned information-theoretic approaches for studying generalization. Our first result bridges IOMI_D(A) and CMI^k_D(A), showing that for any learning algorithm, any data distribution, and any k, CMI^k_D(A) is less than IOMI_D(A). We also show that CMI^k_D(A) converges to IOMI_D(A) as k → ∞ when |W| is finite. In Section 3, we establish two novel bounds on generalization error using the random index and super sample structure of Steinke and Zakynthinou, and show that both our bounds are tighter than those based on CMI^k_D(A). Finally, in Section 4, we show how to construct generalization error bounds for noisy, iterative algorithms using the generalization bound proposed in Section 3. Using the Langevin dynamics algorithm as our example, we introduce a new type of prior for iterative algorithms that "learns" from the past trajectory, using a form of hypothesis testing, in order to not "pay" again for information obtained at previous iterations. Experiments show that our new bound is tighter than [14, 15], especially in the late stages of training, where the hypothesis-test component of the bound discounts the contributions of new gradients. Our new bounds are non-vacuous for far more epochs than related work, and do not diverge or exceed 1 even when severe overfitting occurs. 1.1 Contributions 1. We characterize the connections between IOMI_D(A) and CMI^k_D(A). We show that CMI^k_D(A) is always less than IOMI_D(A) for any data distribution, learning algorithm and k. Further, we prove that CMI^k_D(A) converges to IOMI_D(A) as k goes to infinity for finite parameter spaces. 2. We provide novel generalization bounds that relate generalization to the mutual information between the learned parameters and a random subset of the random indices U_1, ..., U_n. 3. We apply our generalization bounds to the Langevin dynamics algorithm by constructing a specific generalized prior and posterior. We employ a generalized prior that learns about the values of the indices U from the optimization trajectory.
To our knowledge, this is the first generalized prior that learns about the dataset from the iterates of the learning algorithm. 4. We show empirically that our bound on the expected generalization error of the Langevin dynamics algorithm is tighter than other existing bounds in the literature. 1.2 Definitions from Probability and Information Theory Let S, T be measurable spaces, let M_1(S) be the space of probability measures on S, and define a probability kernel from S to T to be a measurable map from S to M_1(T). For random elements X in S and Y in T, write P[X] ∈ M_1(S) for the distribution of X and write P_Y[X] for (a regular version of) the conditional distribution of X given Y, viewed as a σ(Y)-measurable random element in M_1(S). Recall that P_Y[X] is a regular version if, for some probability kernel κ from T to S, we have P_Y[X] = κ(Y) a.s. If Y is σ(X)-measurable, then Y is a function of X. If a random measure P is σ(X)-measurable, then the measure P is determined by X, but a random element Y with P_X[Y] = P is not X-measurable unless it is degenerate. If X is a random variable, write EX for the expectation of X and write E^Y X or E[X|Y] for (an arbitrary version of) the conditional expectation of X given Y, which is Y-measurable. For a random element X on S and a probability kernel P from S to T, the composition P(X) := P ∘ X is a σ(X)-measurable random measure of a random element taking values in T. We occasionally use this notation to refer to a kernel P implicitly by the way it acts on X. Let P, Q be probability measures on a measurable space S. For a P-integrable or nonnegative measurable function f, let P[f] = ∫ f dP. When Q is absolutely continuous with respect to P, denoted Q ≪ P, we write dQ/dP for the Radon–Nikodym derivative of Q with respect to P. We rely on several notions from information theory: The KL divergence of Q with respect to P, denoted KL(Q‖P), is Q[log dQ/dP] when Q ≪ P and ∞ otherwise. Let X, Y, and Z be random elements, and let ⊗ form product measures. The mutual information between X and Y is I(X;Y) = KL(P[(X,Y)] ‖ P[X] ⊗ P[Y]). The disintegrated mutual information between X and Y given Z is¹ I_Z(X;Y) = KL(P_Z[(X,Y)] ‖ P_Z[X] ⊗ P_Z[Y]). The conditional mutual information of X and Y given Z is I(X;Y|Z) = E I_Z(X;Y). 2 Connections between IOMI_D(A) and CMI^k_D(A) In this section, we compare approaches for the information-theoretic analysis of generalization error, and we aim to unify the two main information-theoretic approaches for studying generalization. In Theorems 2.1 and 2.2 we will show that for any learning algorithm and any data distribution, CMI^k_D(A) provides a tighter measure of dependence than IOMI_D(A), and that one can recover IOMI_D(A)-based bounds from CMI^k_D(A) for finite parameter spaces. A fundamental difference between IOMI_D(A) and CMI^k_D(A) is that CMI^k_D(A) is bounded by n log k [21], while IOMI_D(A) can be infinite even for learning algorithms that provably generalize [6]. One of the motivations of Steinke and Zakynthinou was that proper empirical risk minimization algorithms over threshold functions on R have large IOMI_D(A) [4]. In contrast, some such algorithms have small CMI^k_D(A). Our first result shows that CMI^k_D(A) is never larger than IOMI_D(A). Theorem 2.1. For every k ≥ 2, I(W;S) = I(W; Z̃^(k)) + I(W; U^(k) | Z̃^(k)), and hence CMI^k_D(A) ≤ IOMI_D(A). Next, we address the role of the size of the super sample in CMI. In [21], CMI is defined using a super sample of size 2n (k = 2) only.
Our next result demonstrates that CMI^k_D(A) agrees with IOMI_D(A) in the limit as k → ∞ when the parameter space is finite. Theorem 2.2. If the output of A takes values in a finite set, then lim_{k→∞} CMI^k_D(A) = IOMI_D(A). Combining Theorems 1.3 and 2.2, we obtain

  EGE_D(A) ≤ lim_{k→∞} √( 2 CMI^k_D(A) / n ) = √( 2 IOMI_D(A) / n ),   (1)

when the parameter space is finite. Comparing Eq. (1) with Theorem 1.1, we observe that Eq. (1) is twice as large. In Theorem B.1, we present a refined bound based on CMI^k_D(A) which asymptotically matches Theorem 1.1. The proofs of the results of this section appear in Appendix C.

¹ Letting φ satisfy φ(Z) = I_Z(X;Y) a.s., define I(X;Y | Z = z) = φ(z). This notation is necessarily well defined only up to a null set under the marginal distribution of Z.

3 Sharpened Bounds based on Individual Samples We now present two novel generalization bounds and show that they provide a tighter characterization of the generalization error than Theorem 1.3. The results are inspired by the improvements on IOMI_D(A) due to Bu et al. [6]. In particular, Theorem 3.1 bounds the expected generalization error in terms of the mutual information between the output parameter and a random subsequence of the indices U^(2), given the super sample. Theorem 3.4 provides a generalization bound in terms of the disintegrated mutual information between each individual element of U^(2) and the output of the learning algorithm, W. The bound in Theorem 3.4 is an analogue of [6, Prop. 1] for Theorem 1.3. In this section, as in Steinke and Zakynthinou [21], we only consider Z̃^(k) and U^(k) with k = 2, so we drop the superscript from U^(k). Let U = (U_1, ..., U_n). The proofs for the results of this section appear in Appendix D. Theorem 3.1. Fix m ∈ [n] and let J = (J_1, ..., J_m) be a random subset of [n], distributed uniformly among all subsets of size m and independent from W, Z̃^(2), and U. Then

  EGE_D(A) ≤ E √( 2 I_{Z̃^(2)}(W; U_J | J) / m ).   (2)

By applying Jensen's inequality to Theorem 3.1, we obtain

  EGE_D(A) ≤ √( 2 I(W; U_J | Z̃^(2), J) / m ).   (3)

Our next result, Theorem 3.2, lets us compare Eq. (3) for different values of m = |J|. Theorem 3.2. Let m_1 < m_2 ∈ [n], and let J^(m_1), J^(m_2) be random subsets of [n], distributed uniformly among all subsets of size m_1 and m_2, respectively, and independent from W, Z̃^(2), and U. Then

  I(W; U_{J^(m_1)} | Z̃^(2), J^(m_1)) / m_1 ≤ I(W; U_{J^(m_2)} | Z̃^(2), J^(m_2)) / m_2.   (4)

Consequently, taking m_2 = n, for all 1 ≤ m_1 ≤ n,

  E √( 2 I_{Z̃^(2)}(W; U_{J^(m_1)} | J^(m_1)) / m_1 ) ≤ √( 2 I(W; U | Z̃^(2)) / n ).   (5)

Corollary 3.3. EGE_D(A) ≤ √( 2 I(W; U_J | Z̃^(2), J) / m ). The case m = |J| = n is equivalent to Theorem 1.3. The bound is increasing in m ∈ [n], and the tightest bound is achieved when m = |J| = 1. Also, Eq. (5) shows that our bound in Theorem 3.1 is tighter than Theorem 1.3 for k = 2. To further tighten Theorem 3.2 when m = 1, we show that we can pull the expectation over both Z̃^(2) and J outside the concave square-root function. Theorem 3.4. Let J ∼ Unif([n]) (i.e., m = 1 above) be independent from W, Z̃^(2), and U. Then

  EGE_D(A) ≤ E √( 2 I_{Z̃^(2),J}(W; U_J) ) = (1/n) ∑_{i=1}^n E √( 2 I_{Z̃^(2)}(W; U_i) ).   (6)

Remark 3.5. Theorem 3.4 is tighter than Theorem 1.3 since

  (1/n) ∑_{i=1}^n E √( 2 I_{Z̃^(2)}(W; U_i) ) ≤ √( ∑_{i=1}^n (2/n) I(W; U_i | Z̃^(2)) ) ≤ √( (2/n) I(W; U | Z̃^(2)) ).   (7)

The first inequality is Jensen's, while the second follows from the independence of the indices U_i. / 3.1 Controlling CMI bounds using KL Divergence It is often difficult to compute MI directly.
One standard approach in the literature is to bound the MI by the expectation of the KL divergence of the conditional distribution of the parameters given the data (the "posterior") with respect to a "prior." The statement below is adapted from Negrea et al. [15]. Lemma 3.6. Let X, Y, and Z be random elements. For all σ(Z)-measurable random probability measures P on the space of Y,

  I_Z(X;Y) ≤ E_Z[ KL(P_{X,Z}[Y] ‖ P) ] a.s.,

with a.s. equality for P = E_Z[P_{X,Z}[Y]] = P_Z[Y]. We refer to the conditional law of W given S as the "posterior" of W given S, which we denote Q = P_S[W] = P_{Z̃^(2),U}[W], and to P as the prior. This can be used in combination with, for example, Lemma 3.6 and Theorem 1.3 to obtain that, for any Z̃^(2)-measurable random prior P(Z̃^(2)),

  EGE_D(A) ≤ √( 2 I(W; U | Z̃^(2)) / n ) ≤ √( 2 E[KL(Q ‖ P(Z̃^(2)))] / n ).   (8)

Note that the prior only has access to Z̃^(2); therefore, from its perspective, the training set can take 2^n different values. Alternatively, combining Lemma 3.6 and Theorem 3.1 yields

  EGE_D(A) ≤ E √( 2 E_{Z̃^(2)} I_{Z̃^(2)}(W; U_J | U_{J^c}, J) / m ) ≤ E √( 2 E_{Z̃^(2)}[KL(Q ‖ P(Z̃^(2), U_{J^c}, J))] / m ).   (9)

In Eq. (9) the prior has access to n − m samples in the training set, S_{J^c}, because Z̃^(2)_{U_{J^c}} = S_{J^c}. However, since Z̃^(2) is known to the prior, the training set can take only 2^m distinct values from the point of view of the prior in Eq. (9). This is a significant reduction in the amount of information that can be carried by the indices in U_J about the output hypothesis. Consequently, priors can be designed to better exploit the dependence between the output hypothesis and the index set. 3.2 Tighter Generalization bound for the case m = 1 Since the strategy above controls MI-based expressions via KL divergences, one may ask whether a bound derived with similar tools, but directly in terms of KL, can be tighter than the combination of Lemma 3.6 and Theorem 3.1. The following result shows that for m = 1 a tighter bound can be derived by pulling the expectation over both U_{J^c} and J outside the concave square-root function. Theorem 3.7. Let J ∼ Unif([n]) be independent from W, U, and Z̃^(2). Let Q = P_{Z̃^(2),U}[W] and let P be a σ(Z̃^(2), U_{J^c}, J)-measurable random probability measure. Then

  EGE_D(A) ≤ E √( 2 KL(Q ‖ P) ).   (10)

Here, the KL divergence is between two σ(Z̃^(2), J, U)-measurable random measures, so it is itself random. 4 Generalization bounds for noisy, iterative algorithms We apply this new class of generalization bounds to non-convex learning. We analyze the Langevin dynamics (LD) algorithm [8], following the analysis pioneered by Pensia et al. [16]. The example we set here is a blueprint for building bounds for other iterative algorithms. Our approach is similar to the recent advances by Li et al. [14] and Negrea et al. [15], employing data-dependent estimates to obtain easily simulated bounds. We find that our new results allow us to exploit past iterates to obtain tighter bounds. The influence of past iterates is seen to take the form of a hypothesis test. 4.1 Bounding Generalization Error via Hypothesis Testing The chain rule for KL divergence is a key ingredient of information-theoretic generalization error bounds for iterative algorithms [6, 14, 15, 16]. W^{0,...,T} denotes the space of parameter trajectories generated by an iterative algorithm in T iterations. For any measure ν on W^{0,...,T} and W ∼ ν, let ν_0 denote the marginal law of W_0, and ν_{t|} the conditional law of W_t given W_0, ..., W_{t−1}. Lemma 4.1 (Chain Rule for KL). Let Q, P be probability measures on W^{0,...,T} with Q_0 = P_0.
Then

  KL(Q_T ‖ P_T) ≤ KL(Q ‖ P) = ∑_{t=1}^T Q_{0:(t−1)}[ KL(Q_{t|} ‖ P_{t|}) ].

This lemma bounds the KL divergence involving the posterior of the terminal parameter by one involving the sum of the KL divergences over each individual step of the trajectory. The benefits of using the chain rule to analyze an iterative algorithm are twofold: first, we gain analytical tractability; many bounds that appear in the literature implicitly require this form of incrementation [6, 14, 15, 16]. Second, and novel to the present work, the information in the optimization trajectory can be exploited to identify U from the history of W. In order to understand how the prior may take advantage of information from the optimization trajectory, consider applying Lemma 4.1 to the KL term in Eq. (9). We have

  KL(Q_T ‖ P_T(Z̃^(2), U_{J^c}, J)) ≤ ∑_{t=1}^T E_{Z̃^(2), U_{J^c}, J}[ KL(Q_{t|} ‖ P_{t|}(Z̃^(2), U_{J^c}, J)) ].

Here P_{t|}(Z̃^(2), U_{J^c}, J) is a σ(Z̃^(2), U_{J^c}, J, W_{0:t−1})-measurable random probability measure. The prior may use U_{J^c}, Z̃^(2), and J to reduce the number of possible values that U can take to 2^|J|. Moreover, since U_J is constant during optimization, W_0, W_1, W_2, ..., W_{t−1} may leak some information about U_J, and the prior can use this information to tighten the bound by choosing a P_{t|} that achieves small KL(Q_{t|} ‖ P_{t|}). In the special case where the prior can perfectly estimate U_J from W_0, W_1, W_2, ..., W_{t−1}, we can set P_{t|} = Q_{t|} and KL(Q_{t|} ‖ P_{t|}) will be zero. As will be seen in the next subsection, we can explicitly design a prior that uses the information in the optimization trajectory for the LD algorithm. The process by which the prior can learn from the trajectory can be viewed as an online hypothesis test, or binary decision problem, where the prior at time t allocates belief among 2^m possible explanations, given by the possible values of U_J, based on the evidence provided by W_0, ..., W_t. If the prior is able to identify U_J based on the Ws, then the bound stops accumulating, even if the gradients taken by subsequent training steps are large. This means that penalties for information obtained later in training are discounted based on the information obtained earlier in training. 4.2 Example: Langevin Dynamics Algorithm for Non-Convex Learning We apply these results to obtain generalization bounds for a gradient-based iterative noisy algorithm, the Langevin dynamics (LD) algorithm. For classification with continuous parameters, the 0–1 loss does not provide useful gradients. Typically, we optimize a surrogate objective based on a surrogate loss, such as cross entropy. Write ℓ̃ : Z × W → R for the surrogate loss and let R̃_S(w) = (1/n) ∑_{i=1}^n ℓ̃(Z_i, w) be the empirical surrogate risk. Let η_t be the learning rate at time t, β_t the inverse temperature at time t, and let ε_t be sampled i.i.d. from N(0, I_d). Then the LD algorithm iterates are given by

  W_{t+1} = W_t − η_t ∇R̃_S(W_t) + √(2η_t / β_t) ε_t.   (11)

The prior We will take m = 1, and construct a bespoke σ(Z̃^(2), U_{J^c}, J)-measurable prior for this problem in order to apply Theorem 3.7. The prior is based on a decision function θ : R → [0,1], which at each time t + 1 takes in a σ(W_0, ..., W_t)-measurable test statistic, ΔY_t, and returns a degree of belief in favor of the hypothesis U_J = 1 over U_J = 2. The prior predicts an LD step by replacing the unknown (to the prior) contribution to the gradient of the data point at index J with a θ̂_t = θ(ΔY_t)-weighted average of the gradients due to each candidate {Z̃_{1,J}, Z̃_{2,J}}. The conditional law of the t-th iterate under the prior is a σ(Z̃^(2), U_{J^c}, J, W_0, ..., W_t)-measurable random measure, as required.
The exact value of the test statistic is ΔY_t = Y_{t,2} − Y_{t,1}, where Y_{0,1} = Y_{0,2} = 0 and the Y_{t,u} are defined by the formula in Eq. (13). The conditional law of the t-th iterate under the prior is described by

  W_{t+1} = W_t − (η_t/n) ( ∑_{i=1, i≠J}^n ∇ℓ̃(Z_i, W_t) + θ̂_t ∇ℓ̃(Z̃_{1,J}, W_t) + (1 − θ̂_t) ∇ℓ̃(Z̃_{2,J}, W_t) ) + √(2η_t / β_t) ε_t.   (12)

The test statistic chosen is based on the log-likelihood-ratio test statistic for the independent mean-0 Gaussian random vectors (ε_s)_{s=1}^t, which is well known to be uniformly most powerful for the binary discrimination of means. Natural choices for θ are symmetric CDFs, since they treat the possible values of U symmetrically and are monotone in the test statistic. We define the two-sample incoherence at time t by ζ_t = ∇ℓ̃(Z̃_{1,J}, W_t) − ∇ℓ̃(Z̃_{2,J}, W_t). Θ denotes the set of measurable θ : R → [0,1]. Y_{0,1} = Y_{0,2} = 0, and for t ≥ 1, Y_{t,u} is given by (for u ∈ {1,2})

  Y_{t,u} := ∑_{i=1}^t (β_{i−1} / (4η_{i−1})) ‖ W_i − W_{i−1} + η_{i−1} ((n−1)/n) ∇R̃_{S_{J^c}}(W_{i−1}) + (η_{i−1}/n) ∇ℓ̃(Z̃_{u,J}, W_{i−1}) ‖².   (13)

Theorem 4.2 (Generalization bound for LD algorithm). Let {W_t}_{t∈[T]} denote the iterates of the LD algorithm. If ℓ(Z,w) is [0,1]-bounded, then

  E[ R_D(W_T) − R̂_S(W_T) ] ≤ (1/(n√2)) inf_{θ∈Θ} E √( ∑_{t=0}^{T−1} E_{Z̃^(2),U,J} β_t η_t ‖ζ_t‖² ( 1{U_J = 1} − θ(Y_{t,2} − Y_{t,1}) )² ).   (14)

Remark 4.3. For θ ∈ Θ with 1 − θ(x) = θ(−x), Eq. (14) simplifies to

  E[ R_D(W_T) − R̂_S(W_T) ] ≤ (1/(n√2)) E √( ∑_{t=0}^{T−1} E_{Z̃^(2),U,J} β_t η_t ‖ζ_t‖² θ²( (−1)^{U_J} (Y_{t,2} − Y_{t,1}) ) ).   (15)

For instance, θ(x) = 1/2 + (1/2) tanh(x) and θ(x) = 1/2 + (1/2) sign(x) satisfy 1 − θ(x) = θ(−x). / Remark 4.4. By the law of total expectation, for any θ ∈ Θ, EGE_D(A) ≤ (1/(2√2 n)) E[V_1 + V_2], where

  V_u := √( ∑_{t=0}^{T−1} E_{Z̃^(2),U_{J^c},J, U_J=u} β_t η_t ‖ζ_t‖² ( 1{u = 1} − θ(Y_{t,2} − Y_{t,1}) )² ),   u ∈ {1,2}.   (16)

To estimate V_u (u ∈ {1,2}) for fixed J, the training set is S_u = {Z_1, ..., Z_{J−1}, Z̃_{u,J}, Z_{J+1}, ..., Z_n}. Hence V_1, V_2 can be simulated from just n + 1 data points (Z_1, ..., Z_{J−1}, Z_{J+1}, ..., Z_n, Z̃_{1,J}, Z̃_{2,J}) ∼ D^⊗(n+1). / The generalization bound in Eq. (14) does not place any restrictions on the learning rate or on the Lipschitz continuity of the loss or its gradient. In the next corollary, we study the asymptotic properties of the bound in Eq. (14) when ℓ̃ is L-Lipschitz. Then, we draw a comparison between the bound in this paper and some of the existing bounds in the literature. Corollary 4.5. Under the assumption that ℓ̃ is L-Lipschitz, we have ‖ζ_t‖ ≤ 2L. Then, the generalization bound in Eq. (14) can be upper-bounded as

  E( R_D(W_T) − R_S(W_T) ) ≤ (√2 L / n) inf_{θ∈Θ} E √( ∑_{t=0}^{T−1} E_{Z̃^(2),U,J} β_t η_t ( 1{U_J = 1} − θ(Y_{t,2} − Y_{t,1}) )² ).   (17)

Remark 4.6. Under an L-Lipschitz assumption, for the LD algorithm, Li et al. [14, Thm. 9] have

  E[ R_D(W_T) − R_S(W_T) ] ≤ (√2 L / n) √( ∑_{t=0}^{T−1} β_t η_t ).   (18)

We immediately see that Eq. (17) provides a constant-factor improvement over Eq. (18) by naïvely using θ : x ↦ 1/2. Our bound has an order-wise improvement with respect to n over those of Bu et al. [6] and Pensia et al. [16] under the L-Lipschitz assumption. Negrea et al. [15, App. E.1] obtain

  E[ R_D(W_T) − R_S(W_T) ] ≤ ( L / (2(n−1)) ) √( ∑_{t=0}^{T−1} β_t η_t ),   (19)

which is a constant factor better than our bound for the choice θ : x ↦ 1/2. However, this θ essentially corresponds to no hypothesis test, yielding the same prior as in [15]. For more sophisticated choices of decision function θ, even under a Lipschitz-surrogate-loss assumption, it is difficult to compare our bound with related work because the exact impact of θ-discounting is difficult to quantify analytically. / Remark 4.7.
A prevailing method for analyzing the generalization error of iterative algorithms in [6, 14, 15, 16] is via the chain rule for KL, using priors for the joint distribution of weight vectors that are Markov, i.e., given the t-th weight, the (t+1)-th weight is conditionally independent from the trajectory so far. Existing results using this approach accumulate a "penalty" for each step. In [6, 14, 15], the penalty terms are, respectively, the squared Lipschitz constant, the squared norm of the gradients, and the trace of the minibatch gradient covariance. The penalty term in our paper is the squared norm of the "two-sample incoherence," defined in Theorem 4.2 as the difference between the gradient of a randomly selected training point and that of the held-out point. However, the use of the chain rule along with existing "Markovian" priors introduces a source of looseness, i.e., the accumulating penalty may diverge to +∞, yielding vacuous bounds (as seen in Fig. 1). The distinguishing feature of our data-dependent CMI analysis is that the penalty terms get "filtered" by the online hypothesis test via our non-Markovian prior, i.e., our prediction for t + 1 depends on the whole trajectory. When the true index can be inferred from the previous weights, the penalty essentially stops accumulating. / 4.2.1 Empirical Results In order to better understand the effect of discounting and the degree of improvement due to our new bounds and more sophisticated prior, we turn to simulation studies. We present and compare empirical evaluations of the generalization bound in Theorem 4.2 with the data-dependent generalization bounds in Li et al. [14] and Negrea et al. [15]. For brevity, many of the details behind our implementation are deferred to Appendix G. The functional forms of our bounds and those of [14, 15] are nearly identical, as all of them use the chain rule for KL divergence. Nevertheless, the summands appearing in the bounds are different. The bound in [14] depends on the squared norm of the surrogate loss gradients, and the generalization bound in Negrea et al. [15] depends on the squared norm of the training set incoherence, defined as ‖∇ℓ̃(Z_J, W_t) − (1/(n−1)) ∑_{i∈[n], i≠J} ∇ℓ̃(Z_i, W_t)‖², where the training set is {Z_1, ..., Z_n} and J ∼ Unif([n]). The first key difference between our bound and others is that the summand in our bound consists of two terms: the squared norm of the two-sample incoherence, i.e., ‖ζ_t‖², and the squared error probability of a hypothesis test at time t, given by the term ( 1{U_J = 1} − θ( ∑_{i=0}^t (Y_{i,2} − Y_{i,1}) ) )² in our bound. A consequence of this, and the second fundamental difference between our bound and existing bounds, is that our bound exhibits a trade-off in ‖ζ_t‖², because a large ‖ζ_t‖² will make the error of the hypothesis test small on future iterations, whereas the bounds in [14, 15] are uniformly increasing with respect to the squared norm of the surrogate loss gradients and the training set incoherence, respectively. In this section we empirically evaluate and compare our bound with related work across various neural network architectures and datasets. Using Monte Carlo (MC) simulation, we compared estimates of our expected generalization error bounds with estimates of the bounds from [14, 15] for the MNIST [13], CIFAR10 [12], and FashionMNIST [23] datasets in Fig. 1 and Table 1. For all the plots we consider θ(x) = (1/2)(1 + erf(x)) for our bound. Also, in the last row of Table 1, we report the unbiased estimate of our bound optimized over the choice of θ function.
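To make the simulation concrete, the following is a minimal NumPy sketch of a single-draw Monte Carlo estimate of the Theorem 4.2 objective on a toy problem (the quadratic surrogate loss, the constants n, d, T, η, β, and the choice θ(x) = (1 + erf(x))/2 are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
n, d, T = 32, 5, 200
eta, beta = 0.05, 100.0

def grad(z, w):                      # gradient in w of the toy loss 0.5*||w - z||^2
    return w - z

# Super sample Z~(2), indices U in {1,2}^n, and the audited coordinate J (m = 1).
Z_tilde = rng.normal(size=(2, n, d))
U = rng.integers(1, 3, size=n)
J = int(rng.integers(n))
S = Z_tilde[U - 1, np.arange(n)]     # training set hidden inside the super sample

W = np.zeros(d)
Y = np.zeros(2)                      # running statistics Y_{t,1}, Y_{t,2} of Eq. (13)
summands = []
for t in range(T):
    theta_hat = 0.5 * (1.0 + erf(Y[1] - Y[0]))               # belief that U_J = 1
    zeta = grad(Z_tilde[0, J], W) - grad(Z_tilde[1, J], W)   # two-sample incoherence
    summands.append(beta * eta * np.sum(zeta ** 2)
                    * (float(U[J] == 1) - theta_hat) ** 2)
    # One Langevin step under the true dynamics, Eq. (11).
    g = grad(S, W).mean(axis=0)
    W_next = W - eta * g + np.sqrt(2.0 * eta / beta) * rng.normal(size=d)
    # Update Y_{t,u}, Eq. (13); note eta*((n-1)/n)*grad R_{S_Jc} = (eta/n)*sum_{i!=J} grad.
    g_rest = (grad(S, W).sum(axis=0) - grad(S[J], W)) / n
    for u in range(2):
        drift = W_next - W + eta * g_rest + (eta / n) * grad(Z_tilde[u, J], W)
        Y[u] += (beta / (4.0 * eta)) * np.sum(drift ** 2)
    W = W_next

# Single-draw estimate of the objective in Eq. (14), with prefactor 1/(n*sqrt(2)).
bound = np.sqrt(np.sum(summands)) / (n * np.sqrt(2.0))
print(f"one-sample estimate of the Eq. (14) objective: {bound:.4f}")
```

With θ ≡ 1/2 the summands reduce to the constant-weight case discussed in Remark 4.6; as the gap Y_{t,2} − Y_{t,1} grows, θ̂_t commits to one candidate and the per-step penalty shrinks, which is the discounting effect described above.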
We plot the squared norm of the two-sample incoherence and the training set incoherence, as well as the squared error probability of the hypothesis test. Fig. 1 and Table 1 show that our bound is tighter and remains non-vacuous after many more iterations. We also observe that the variances of the MC estimates of our bound are smaller than those of Negrea et al. [15], and they are also smaller than those of Li et al. [14] for the CIFAR10 and MNIST-CNN experiments. Moreover, we observe that the error probability of the hypothesis test decays with the number of iterations, which matches the intuition that, as one observes more noisy increments of the process, one is more able to determine which point is contributing to the gradient. For CIFAR10, ‖ζ_t‖² is large because the generalization gap is large. However, as mentioned at the beginning of this section, a large ‖ζ_t‖² makes the hypothesis testing easier on subsequent iterations. For instance, after iteration 600 the error is vanishingly small for the CIFAR10 experiments, which results in a plateau region in the bound. We can also observe the same phenomenon for the Fashion-MNIST experiment. This property distinguishes our bound from those in [14, 15]. Results for MNIST with a CNN demonstrate that ‖ζ_t‖² and the training set incoherence are close to each other. The reason behind this observation is that the generalization gap is small. Also, for this experiment the performance of the hypothesis testing is only slightly better than random guessing, since the generalization gap is small and it is difficult to distinguish the training samples from the test samples. This observation explains why our generalization bound is close to that of [15]. Nevertheless, the hypothesis testing performance improves with more training iterations, leading the two bounds to diverge, with our new bound performing better at later iterations. Finally, the scaling of our bound with respect to the number of iterations is tighter than in the bounds in [14, 15], as can be seen in Fig. 1.

[Figure 1 — four rows of panels (MNIST with MLP, MNIST with CNN, F-MNIST with CNN, CIFAR with CNN); columns: expected generalization error, squared norm of the incoherence, squared error probability of the hypothesis test; curves: Negrea et al. (2019), Li, Luo, and Qiao (2020), CMI bound (ours).]

Figure 1: Numerical results for various datasets and architectures. All the x-axes represent the training iteration. The plots in the first column compare a Monte Carlo estimate of our bounds with those of Li et al.
[14] and Negrea et al. [15]. The plots in the second column compare the mean of the training set incoherence in [15] with the two-sample incoherence in our bound. Finally, the plots in the third column show the mean of the squared error probability of the hypothesis testing performed by the proposed prior in our bound.

                      MNIST-MLP      MNIST-CNN      CIFAR10-CNN        FMNIST-CNN
Training error        4.33±0.01%     2.59±0.01%     9.39±0.36%         7.96±0.03%
Generalization error  0.88±0.01%     0.55±0.01%     32.89±0.44%        3.71±0.03%
Negrea et al. [15]    67.93±16.25%   20.98±5.01%    4112.63±567.08%    82.89±12.64%
Li et al. [14]        600.29±1.99%   245.03±2.37%   20754.32±75.95%    598.62±3.21%
CMI (Ours)            44.65±4.27%    16.51±1.41%    71.76±4.82%        48.01±4.22%
CMI-OPT (Ours)        39.06±5.52%    13.24±1.53%    63.00±5.97%        41.17±5.85%

Table 1: Summary of the results. The generalization bounds are reported at the end of training.

Acknowledgments The authors would like to thank Blair Bilodeau and Yasaman Mahdaviyeh for feedback on drafts of this work, and Shiva Ketabi for helpful discussions on the implementation of the bounds. Funding MH is supported by the Vector Institute. JN is supported by an NSERC Vanier Canada Graduate Scholarship, and by the Vector Institute. DMR is supported by an NSERC Discovery Grant and an Ontario Early Researcher Award. This research was carried out in part while MH, JN, GKD, and DMR were visiting the Institute for Advanced Study. JN's visit to the Institute was funded by an NSERC Michael Smith Foreign Study Supplement. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners). Broader Impact This work builds upon the community's understanding of generalization error for machine learning methods. This has a positive impact on the scientific advancement of the field, and may lead to further improvements in our understanding, methodologies and applications of machine learning and AI. While there are no obvious direct societal implications of the present work, the indirect and longer-term impact on society may be positive, negative or both, depending on how, where and for what the machine learning methods that will have benefited from our research are used in the future.
1. What is the focus and contribution of the paper on information-based generalization bounds and Langevin dynamics?
2. What are the strengths of the proposed approach, particularly in terms of comprehensiveness and structured organization?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions
Strengths
Weaknesses
Summary and Contributions Roughly, the major contribution of the manuscript is twofold: (1) the authors provide a refined information-based generalization bound; (2) the authors apply the bound to analyze the learning algorithm based on Langevin dynamics. The authors also provide some results regarding the relationship between MI and CMI (Section 2) and some prior-based scenarios, which are relatively minor compared to the other results. Strengths I quite enjoyed reading this manuscript, mainly due to the comprehensiveness of the analysis. Having provided a refined generalization bound, the authors step forward to analyze the prior-based scenarios and Langevin dynamics-based learning procedures. The numerical results in Section 4.2.1 were also helpful for understanding the current state of the theory-practice gap on generalization of neural network models. Additionally:
- the manuscript is nicely structured,
- the relevant literature is adequately (and honestly) cited,
- while the main results look deceptively simple, the authors have taken sufficient effort to concretely establish the claims.
Weaknesses
- I think the paper could have been written more clearly (in my humble opinion). For instance: (1) IOMI could have been presented more "properly," making it easier for the readers to locate. (2) Remark 3.5 states that the last inequality follows from the independence of the indices U_i, which took several lines for me to verify formally. (3) Lemma 3.6 could be clarified further by stating exactly which result of [15] the authors refer to and how it was adapted. A similar issue arises with Theorem 1.1 and Lemma 4.1 (there could be more).
- The results in Section 2 are either expected or not sufficiently discussed. Theorem 2.1 is probably well expected (in my opinion) by readers who are familiar with the work of Steinke and Zakynthinou. Regarding Theorem 2.2, I must admit that I failed to understand the utility of the bound, especially given the strict "finiteness" assumption. If CMI bounds are better, why is it useful to know that they can recover MI bounds? Is there any specific case where the MI bound is easily analyzable and the CMI bound is not?
- I hate to say this, but the main ideas underlying the proof are not strikingly new.
NIPS
Title Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms Abstract The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm’s output and the training sample. In this work, we study the proposal by Steinke and Zakynthinou (2020) to reason about the generalization error of a learning algorithm by introducing a super sample that contains the training sample as a random subset and computing mutual information conditional on the super sample. We first show that these new bounds based on the conditional mutual information are tighter than those based on the unconditional mutual information. We then introduce yet tighter bounds, building on the “individual sample” idea of Bu et al. (2019) and the “data dependent” ideas of Negrea et al. (2019), using disintegrated mutual information. Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the super sample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests. 1 Introduction Let D be an unknown distribution on a space Z , and letW be a set of parameters that index a set of predictors, ` : Z ×W → [0,1] be a bounded loss function. Consider a (randomized) learning algorithm A that selects an element W inW , based on an IID sample S = (Z1, . . . ,Zn)∼D⊗n. For w ∈W , let RD(w) = E`(Z,w) denote the risk of predictor w, and R̂S(w) = 1n ∑ m i=1 `(Zi,w) denote the empirical risk. Our interest in this paper is the (expected) generalization error of A with respect to D, EGED(A) = E[RD(W )− R̂S(W )]. In this work, we study bounds on generalization error in terms of information-theoretic measures of dependence between the data and the output of the learning algorithm. This approach was initiated by Russo and Zou [18, 19] and has since been extended [2, 3, 6, 9, 17, 24]. The basic result in this line of work is that the generalization error can be bounded in terms of the mutual information I(W ;S) between the data and the learned parameter, a quantity that has been called the information usage or input–output mutual information of A with respect to D, which we denote by IOMID(A). The following result is due to Russo and Zou [18] and Xu and Raginsky [24]. Theorem 1.1. EGED(A)≤ √ IOMID(A) 2n . 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Theorem 1.1 formalizes the intuition that a learning algorithm without heavy dependence on the training set will generalize well. This result has been extended in many directions: Raginsky et al. [17] connect variants of IOMID(A) to different notions of stability. Asadi et al. [3] establish refined bounds using chaining techniques for subgaussian processes. Bu et al. [6] obtain a tighter bound by replacing IOMID(A) with the mutual information between W and a single training data point. Negrea et al. [15] propose variants that allow for data-dependent estimates. See also [1, 2, 4, 9, 10]. Our focus in this paper is on a new class of information-theoretic bounds on generalization error, proposed by Steinke and Zakynthinou [21]. Fix k ≥ 2, let [k] = {1, . . . ,k}, let U (k) = (U1, . . . ,Un) ∼ Unif([k]n), and let Z̃(k) ∼ D⊗(k×n) be a k×n array of IID random elements in Z , independent from U (k). Let S = ( ZU1,1, . . . 
,ZUn,n ) and let W be a random element inW such that conditional on S, U (k), and Z̃(k), W has distribution A(S). It follows that, conditional on S, W is independent from U (k) and Z̃(k). By construction, the data set S is hidden inside the super sample; the indices U (k) specify where. Steinke and Zakynthinou [21] use these additional structures to define: Definition 1.2. The conditional mutual information of A w.r.t. D is CMIkD(A) = I(W ;U (k)|Z̃(k)). Intuitively, CMIkD(A) captures how well we can recognize which samples from the given supersample Z̃(k) were in the training set, given the learned parameters. This intuition and the connection of CMIkD(A) with the membership attack [20] can be formalized using Fano’s inequality, showing that CMIkD(A) can be used to lower bound the error of any estimator of U (k) given W and Z̃(k). (See Appendix A.) Steinke and Zakynthinou [21] connect CMIkD(A) with well-known notions in learning theory such as distributional stability, differential privacy, and VC dimension, and establish the following bound [21, Thm. 5.1] in the case k = 2, the extension to k ≥ 2 being straightforward: Theorem 1.3. EGED(A)≤ √ 2CMI kD(A) n . This paper improves our understanding of the framework introduced by Steinke and Zakynthinou [21], identifies tighter bounds, and applies these techniques to the analysis of a real algorithm. In Section 2, we present several formal connections between the two aforementioned informationtheoretic approaches for studying generalization. Our first result bridges IOMID(A) and CMIkD(A), showing that for any learning algorithm, any data distribution, and any k, CMIkD(A) is less that IOMID(A). We also show that CMIkD(A) converges to IOMID(A) as k→∞ when |W| is finite. In Section 3, we establish two novel bounds on generalization error using the random index and super sample structure of Steinke and Zakynthinou, and show that both our bounds are tighter than those based on CMIkD(A). Finally, in Section 4, we show how to construct generalization error bounds for noisy, iterative algorithms using the generalization bound proposed in Section 3. Using the Langevin dynamics algorithm as our example, we introduce a new type of prior for iterative algorithms that “learns” from the past trajectory, using a form of hypothesis testing, in order to not “pay” again for information obtained at previous iterations. Experiments show that our new bound is tighter than [14, 15], especially in the late stages of training, where the hypothesis test component of the bound discounts the contributions of new gradients. Our new bounds are non-vacuous for a great deal more epochs than related work, and do not diverge or exceed 1 even when severe overfitting occurs. 1.1 Contributions 1. We characterize the connections between the IOMID(A) and CMIkD(A). We show that CMIkD(A) is always less than the IOMID(A) for any data distribution, learning algorithms and k. Further, we prove that CMIkD(A) converges to IOMID(A) when k goes to infinity for finite parameter spaces. 2. We provide novel generalization bounds that relate generalization to the mutual information between learned parameters and a random subset of the random indices U1, . . .Un. 3. We apply our generalization bounds to the Langevin dynamics algorithm by constructing a specific generalized prior and posterior. We employ a generalized prior that learns about the values of the indices U from the optimization trajectory. 
To our knowledge, this is the first generalized prior that learns about the dataset from the iterates of the learning algorithm. 4. We show empirically that our bound on the expected generalization error of Langevin dynamics algorithm is tighter than other existing bounds in the literature. 1.2 Definitions from Probability and Information Theory Let S,T be measurable spaces, letM1(S) be the space of probability measures on S, and define a probability kernel from S to T to be a measurable map from S toM1(T ). For random elements X in S and Y in T , write P[X ] ∈M1(S) for the distribution of X and write PY [X ] for (a regular version of) the conditional distribution of X given Y , viewed as a σ(Y )-measurable random element inM1(S). Recall that PY [X ] is a regular version if, for some probability kernel κ from T to S, we have PY [X ] = κ(Y ) a.s. . If Y is σ(X)-measurable then Y is a function of X . If random measure, P, is σ(X)-measurable then the measure P is determined by X , but a random element Y with PX [Y ] = P is not X measurable unless it is degenerate. If X is a random variable, write EX for the expectation of X and write EY X or E[X |Y ] for (an arbitrary version of) the conditional expectation of X given Y , which is Y -measurable. For a random element X on S and a probability kernel P from S to T , the composition P(X) := P◦X is a σ(X)-measurable random measure of a random element taking values in T . We occasionally use this notation to refer to a kernel P implicitly by the way it acts on X . Let P, Q be probability measures on a measurable space S. For a P-integrable or nonnegative measurable function f , let P[ f ] = ∫ f dP. When Q is absolutely continuous with respect to P, denoted Q P, we write dQdP for the Radon–Nikodym derivative of Q with respect to P. We rely on several notions from information theory: The KL divergence of Q with respect to P, denoted KL(Q‖P), is Q[log dQdP ] when Q P and∞ otherwise. Let X , Y , and Z be random elements, and let⊗ form product measures. The mutual information between X and Y is I(X ;Y ) = KL(P[(X ,Y )]‖P[X ]⊗P[Y ]). The disintegrated mutual information between X and Y given Z, is1 IZ(X ;Y ) = KL(PZ [(X ,Y )]‖PZ [X ]⊗PZ [Y ]). The conditional mutual information of X and Y given Z is I(X ;Y |Z) = EIZ(X ,Y ). 2 Connections between IOMID(A) and CMIkD(A) In this section, we compare approaches for the information-theoretic analysis of generalization error, and we aim to unify the two main information-theoretic approaches for studying generalization. In Theorems 2.1 and 2.2 we will show that for any learning algorithm and any data distribution, CMIkD(A) provides a tighter measure of dependence than IOMID(A), and that one can recover IOMID(A)–based bounds from CMIkD(A) for finite parameter spaces. A fundamental difference between IOMID(A) and CMIkD(A) is that CMIkD(A) is bounded by n logk [21], while IOMID(A) can be infinite even for learning algorithms that provably generalize [6]. One of the motivations of Steinke and Zakynthinou was that proper empirical risk minimization algorithms over threshold functions on R have large IOMID(A) [4]. In contrast, some such algorithms have small CMIkD(A). Our first result shows that CMIkD(A) is never larger than IOMID(A). Theorem 2.1. For every k ≥ 2, I(W ;S) = I(W ; Z̃(k))+ I(W ;U (k)|Z̃(k)) and CMIkD(A)≤ IOMID(A). Next, we address the role of the size of the super-sample in CMI. In [21], CMI is defined using a super-sample of size 2n (k = 2) only. 
Our next result demonstrates that CMIkD(A) agree IOMID(A) in the limit as k→∞ when the parameter space is finite. Theorem 2.2. If the output of A takes value in a finite set then lim k→∞ CMIkD(A) = IOMID(A). Combining Theorems 1.3 and 2.2, we obtain EGED(A)≤ lim k→∞ √ 2CMIkD(A) n = √ 2IOMID(A) n , (1) when the parameter space is finite. Comparing Eq. (1) with Theorem 1.1 we observe that Eq. (1) is twice as large. In Theorem B.1, we present a refined bound based on CMIkD(A) which asymptotically match Theorem 1.1. The proofs of the results of this section appear in Appendix C. 1 Letting φ satisfy φ(Z) = IZ(X ;Y ) a.s., define I(X ,Y |Z = z) = φ(z). This notation is necessarily well defined only up to a null set under the marginal distribution of Z. 3 Sharpened Bounds based on Individual Samples We now present two novel generalization bounds and show they provide a tighter characterization of the generalization error than Theorem 1.3. The results are inspired by the improvements on IOMID(A) due to Bu et al. [6]. In particular, Theorem 3.1 bounds the expected generalization error in terms of the mutual information between the output parameter and a random subsequence of the indices U (2), given the super-sample. Theorem 3.4 provides a generalization bound in terms of the disintegrated mutual information between each individual element of U (2) and the output of the learning algorithm, W . The bound in Theorem 3.4 is an analogue of [6, Prop. 1] for Theorem 1.3. In this section as in Steinke and Zakynthinou [21], we only consider Z̃(k) and U (k) with k = 2, so we will drop the superscript from U (k). Let U = (U1, . . . ,Un). The proofs for the results of this section appear in Appendix D. Theorem 3.1. Fix m ∈ [n] and let J = (J1, . . . ,Jm) be a random subset of [n], distributed uniformly among all subsets of size m and independent from W, Z̃(2), and U. Then EGED(A)≤ E √ 2IZ̃(2)(W ;UJ |J) m . (2) By applying Jensen’s inequality to Theorem 3.1, we obtain EGED(A)≤ √ 2I(W ;UJ |Z̃(2),J) m . (3) Our next results in Theorem 3.2 let us compare Eq. (3) for different values of m = |J|. Theorem 3.2. Let m1 < m2 ∈ [n], and let J(m1),J(m2) be random subsets of [n], distributed uniformly among all subsets of size m1 and m2, respectively, and independent from W, Z̃(2), and U. Then I(W ;UJ(m1) |Z̃(2),J(m1)) m1 ≤ I(W ;UJ(m2) |Z̃(2),J(m2)) m2 . (4) Consequently, taking m2 = n, for all 1≤ m1 ≤ n E √ 2 IZ̃(2)(W ;UJ(m1) |J(m1)) m1 ≤ √ 2I(W ;U |Z̃(2)) n . (5) Corollary 3.3. EGED(A) ≤ √ 2 I(W ;UJ |Z̃(2),J)/m. The case m = |J| = n is equivalent to Theorem 1.3. The bound is increasing in m ∈ [n], and, the tightest bound is achieved when m = |J|= 1. Also, Eq. (5) shows our bound in Theorem 3.1 is tighter than Theorem 1.3 for k = 2. To further tighten Theorem 3.2 when m = 1, we show that we can pull the expectation over both Z̃(2) and J outside the concave square-root function. Theorem 3.4. Let J ∼ Unif([n]) (i.e., m = 1 above) be independent from W, Z̃(2), and U. Then EGED(A)≤E √ 2IZ̃(2),J(W ;UJ) = 1 n n ∑ i=1 E √ 2IZ̃(2)(W ;Ui). (6) Remark 3.5. Theorem 3.4 is tighter than Theorem 1.3 since 1 n n ∑ i=1 E √ 2IZ̃(2)(W ;Ui)≤ √ n ∑ i=1 2 n I(W ;Ui|Z̃(2))≤ √ 2 n I(W ;U |Z̃(2)) (7) The first inequality is Jensen’s, while the second follows from the independence of indices Ui. / 3.1 Controlling CMI bounds using KL Divergence It is often difficult to compute MI directly. 
One standard approach in the literature is to bound MI by the expectation of the KL divergence of the conditional distribution of the parameters given the data (the “posterior”) with respect to a “prior”. The statement below is adapted from Negrea et al. [15]. Lemma 3.6. Let X, Y , and Z be random elements. For all σ(Z)-measurable random probability measures P on the space of Y , IZ(X ;Y )≤ EZ [KL(PX ,Z [Y ]‖P)] a.s., with a.s. equality for P = EZ [PX ,Z [Y ]] = PZ [Y ]. We refer to the conditional law of W given S as the “posterior" of W given S, which we denote Q = PS[W ] = PZ̃(2),U [W ], and to P as the prior. This can be used in combination with, for example, Lemma 3.6 and Theorem 1.3 to obtain that for any Z̃(2)-measurable random prior P(Z̃(2)) EGED(A)≤ √ 2 I(W ;U |Z̃(2)) n ≤ √ 2E[KL(Q‖P(Z̃(2)))] n . (8) Note that the prior only has access to Z̃(2), therefore from its perspective the training set can take 2n different values. Alternatively, combining Lemma 3.6 and Theorem 3.1 yields EGED(A)≤ E √ 2 EZ̃(2)IZ̃(2)(W ;UJ |UJc ,J) m ≤ E √ 2EZ̃(2) [KL(Q‖P(Z̃(2),UJc ,J))] m . (9) In Eq. (9) the prior has access to n−m samples in the training set, SJc , because Z̃(2)UJc = SJc . However, since Z̃(2) is known to the prior, the training set can take only 2m distinct values from the point of view of the prior in Eq. (9). This is a significant reduction in the amount of information that can be carried by the indexes in UJ about the output hypothesis. Consequently, priors can be designed to better exploit the dependence of the output hypothesis and the index set. 3.2 Tighter Generalization bound for the case m = 1 Since the strategy above controls MI-based expressions via KL divergences, one may ask whether a bound derived with similar tools, but directly in terms of KL, can be tighter than the combination Lemma 3.6 and Theorem 3.1. The following result shows that for m = 1 a tighter bound can be derived by pulling the expectation over both UJc and J outside the concave square-root function. Theorem 3.7. Let J ∼ Unif([n]) be independent from W, U, and Z̃(2). Let Q = PZ̃(2),U [W ] and P be a σ(Z̃(2),UJc ,J)-measurable random probability measure. Then EGED(A)≤ E √ 2 KL(Q‖P). (10) Here, the KL divergence is between two σ(Z̃(2),J,U)-measurable random measures, so is random. 4 Generalization bounds for noisy, iterative algorithms We apply this new class of generalization bounds to non-convex learning. We analyze the Langevin dynamics (LD) algorithm [8], following the analysis pioneered by Pensia et al. [16]. The example we set here is a blueprint for building bounds for other iterative algorithms. Our approach is similar to the recent advances by Li et al. [14] and Negrea et al. [15], employing data-dependent estimates to obtain easily simulated bounds. We find our new results allow us to exploit past iterates to obtain tighter bounds. The influence of past iterates is seen to take the form of a hypothesis test. 4.1 Bounding Generalization Error via Hypothesis Testing The chain rule for KL divergence is a key ingredient of information-theoretic generalization error bounds for iterative algorithms [6, 14, 15, 16]. W{0,...,T} denotes the space of parameters generated by an iterative algorithm in T iterations. For any measure, ν , onW{0,...T}, and W ∼ ν , let ν0 denote the marginal law of W0, and νt| denote the conditional law of Wt given W0 . . .Wt−1. Lemma 4.1 (Chain Rule for KL). Let Q,P be probability measures onW{0,...,T} with Q0 = P0. 
4 Generalization bounds for noisy, iterative algorithms

We apply this new class of generalization bounds to non-convex learning. We analyze the Langevin dynamics (LD) algorithm [8], following the analysis pioneered by Pensia et al. [16]. The example we set here is a blueprint for building bounds for other iterative algorithms. Our approach is similar to the recent advances by Li et al. [14] and Negrea et al. [15], employing data-dependent estimates to obtain easily simulated bounds. We find that our new results allow us to exploit past iterates to obtain tighter bounds. The influence of past iterates takes the form of a hypothesis test.

4.1 Bounding Generalization Error via Hypothesis Testing

The chain rule for KL divergence is a key ingredient of information-theoretic generalization error bounds for iterative algorithms [6, 14, 15, 16]. W^{0,...,T} denotes the space of parameter trajectories generated by an iterative algorithm in T iterations. For any measure ν on W^{0,...,T} and W ∼ ν, let ν_0 denote the marginal law of W_0, and ν_{t|} the conditional law of W_t given W_0, ..., W_{t−1}. The following lemma bounds the KL divergence for the terminal parameter by the sum of the KL divergences over the individual steps of the trajectory.

Lemma 4.1 (Chain Rule for KL). Let Q, P be probability measures on W^{0,...,T} with Q_0 = P_0. Then

KL(Q_T ‖ P_T) ≤ KL(Q ‖ P) = ∑_{t=1}^T Q_{0:(t−1)}[KL(Q_{t|} ‖ P_{t|})].

The benefit of using the chain rule to analyze an iterative algorithm is two-fold. First, we gain analytical tractability; many bounds that appear in the literature implicitly require this incremental form [6, 14, 15, 16]. Second, and novel to the present work, the information in the optimization trajectory can be exploited to identify U from the history of W. To understand how the prior may take advantage of information from the optimization trajectory, consider applying Lemma 4.1 to the KL term in Eq. (9). We have

KL(Q_T ‖ P_T(Z̃^(2), U_{J^c}, J)) ≤ ∑_{t=1}^T E^{Z̃^(2), U_{J^c}, J}[KL(Q_{t|} ‖ P_{t|}(Z̃^(2), U_{J^c}, J))].

Here P_{t|}(Z̃^(2), U_{J^c}, J) is a σ(Z̃^(2), U_{J^c}, J, W_{0:t−1})-measurable random probability measure. The prior may use U_{J^c}, Z̃^(2), and J to reduce the number of possible values that U can take to 2^{|J|}. Moreover, since U_J is constant during optimization, W_0, W_1, ..., W_{t−1} may leak some information about U_J, and the prior can use this information to tighten the bound by choosing a P_{t|} that achieves a small KL(Q_{t|} ‖ P_{t|}). In the special case where the prior can perfectly estimate U_J from W_0, W_1, ..., W_{t−1}, we can set P_{t|} = Q_{t|}, and KL(Q_{t|} ‖ P_{t|}) will be zero. As will be seen in the next subsection, we can explicitly design a prior that uses the information in the optimization trajectory for the LD algorithm.

The process by which the prior learns from the trajectory can be viewed as an online hypothesis test, or binary decision problem, where the prior at time t allocates belief among 2^m possible explanations, given by the possible values of U_J, based on the evidence provided by W_0, ..., W_t. If the prior is able to identify U_J based on the iterates, then the bound stops accumulating, even if the gradients taken by subsequent training steps are large. This means that penalties for information obtained later in training are discounted based on the information obtained earlier in training.

4.2 Example: Langevin Dynamics Algorithm for Non-Convex Learning

We apply these results to obtain generalization bounds for a gradient-based, noisy, iterative algorithm: the Langevin dynamics (LD) algorithm. For classification with continuous parameters, the 0-1 loss does not provide useful gradients, so one typically optimizes a surrogate objective based on a surrogate loss, such as cross entropy. Write ℓ̃ : Z × W → R for the surrogate loss and let R̃_S(w) = (1/n) ∑_{i=1}^n ℓ̃(Z_i, w) be the empirical surrogate risk. Let η_t be the learning rate at time t, β_t the inverse temperature at time t, and let ε_t be sampled i.i.d. from N(0, I_d). The LD iterates are then given by

W_{t+1} = W_t − η_t ∇R̃_S(W_t) + √(2η_t / β_t) ε_t.   (11)
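As a sanity check on notation, here is a minimal sketch of the LD update in Eq. (11); grad_surrogate_risk is a hypothetical stand-in for ∇R̃_S and is not part of the paper.

import numpy as np

def ld_step(w, grad_surrogate_risk, eta, beta, rng):
    # One Langevin dynamics iterate, Eq. (11):
    # W_{t+1} = W_t - eta * grad R~_S(W_t) + sqrt(2*eta/beta) * eps_t,
    # with eps_t ~ N(0, I_d).
    eps = rng.standard_normal(w.shape)
    return w - eta * grad_surrogate_risk(w) + np.sqrt(2.0 * eta / beta) * eps

# Toy usage with a quadratic surrogate risk (our own choice, for illustration).
rng = np.random.default_rng(0)
w = np.ones(10)
for t in range(100):
    w = ld_step(w, lambda v: v, eta=1e-2, beta=1e4, rng=rng)

Here beta plays the role of the inverse temperature β_t: the larger it is, the less noise is injected at each step.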
The prior. We take m = 1 and construct a bespoke σ(Z̃^(2), U_{J^c}, J)-measurable prior for this problem in order to apply Theorem 3.7. The prior is based on a decision function θ : R → [0, 1], which at each time t + 1 takes in a σ(W_0, ..., W_t)-measurable test statistic ΔY_t and returns a degree of belief in favor of the hypothesis U_J = 1 over U_J = 2. The prior predicts an LD step by replacing the unknown (to the prior) contribution to the gradient of the data point at index J with a θ̂_t = θ(ΔY_t)-weighted average of the gradients due to each candidate {Z_{1,J}, Z_{2,J}}. The test statistic is ΔY_t = Y_{t,2} − Y_{t,1}, where Y_{0,1} = Y_{0,2} = 0 and Y_{t,u} is defined by Eq. (13). The conditional law of the t-th iterate under the prior is a σ(Z̃^(2), U_{J^c}, J, W_0, ..., W_t)-measurable random measure, as required, and is described by

W_{t+1} = W_t − (η_t/n)( ∑_{i=1, i≠J}^n ∇ℓ̃(Z_i, W_t) + θ̂_t ∇ℓ̃(Z_{1,J}, W_t) + (1 − θ̂_t) ∇ℓ̃(Z_{2,J}, W_t) ) + √(2η_t / β_t) ε_t.   (12)

The test statistic is based on the log-likelihood-ratio statistic for the independent, mean-zero Gaussian random vectors (ε_s)_{s=1}^t, which is well known to be uniformly most powerful for the binary discrimination of means. Natural choices for θ are symmetric CDFs, since they treat the possible values of U symmetrically and are monotone in the test statistic. We define the two-sample incoherence at time t by ζ_t = ∇ℓ̃(Z_{1,J}, W_t) − ∇ℓ̃(Z_{2,J}, W_t), and let Θ denote the set of measurable functions θ : R → [0, 1]. With Y_{0,1} = Y_{0,2} = 0, for t ≥ 1 and u ∈ {1, 2},

Y_{t,u} := ∑_{i=1}^t (β_{i−1}/(4η_{i−1})) ‖ W_i − W_{i−1} + η_{i−1} ((n−1)/n) ∇R̃_{S_{J^c}}(W_{i−1}) + (η_{i−1}/n) ∇ℓ̃(Z_{u,J}, W_{i−1}) ‖².   (13)
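To make the prior's online hypothesis test concrete, here is a sketch of one prior step, following Eqs. (12) and (13), with θ(x) = (1 + erf(x))/2, the choice used later in the experiments. All function and variable names are our own placeholders, not the paper's implementation.

import numpy as np
from scipy.special import erf

def prior_step(w, grads_rest_sum, grad_z1J, grad_z2J, Y1, Y2,
               eta, beta, n, rng):
    # One prior iterate (Eq. (12)). grads_rest_sum is the summed gradient
    # over the n-1 known training points; grad_z1J and grad_z2J are the
    # gradients at the two candidates for the unknown J-th point.
    theta_hat = 0.5 * (1.0 + erf(Y2 - Y1))   # belief in favor of U_J = 1
    drift = (grads_rest_sum
             + theta_hat * grad_z1J + (1.0 - theta_hat) * grad_z2J)
    eps = rng.standard_normal(w.shape)
    return w - (eta / n) * drift + np.sqrt(2.0 * eta / beta) * eps

def update_Y(Y_u, w_new, w_old, grad_rest_mean, grad_zuJ, eta, beta, n):
    # Accumulate the statistic Y_{t,u} of Eq. (13) for candidate u;
    # grad_rest_mean is the gradient of R~_{S_{J^c}}, i.e. the mean
    # gradient over the n-1 points in S_{J^c}.
    resid = (w_new - w_old
             + eta * ((n - 1) / n) * grad_rest_mean
             + (eta / n) * grad_zuJ)
    return Y_u + (beta / (4.0 * eta)) * np.sum(resid ** 2)

Since Y_{t,1} and Y_{t,2} are cumulative, θ̂_t aggregates evidence from the entire trajectory, which is exactly what makes the prior non-Markovian (cf. Remark 4.7).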
Theorem 4.2 (Generalization bound for the LD algorithm). Let {W_t}_{t∈[T]} denote the iterates of the LD algorithm. If ℓ(Z, w) is [0,1]-bounded, then

E[R_D(W_T) − R̂_S(W_T)] ≤ (√2/n) inf_{θ∈Θ} E √( ∑_{t=0}^{T−1} E^{Z̃^(2),U,J} β_t η_t ‖ζ_t‖² (1{U_J = 1} − θ(Y_{t,2} − Y_{t,1}))² ).   (14)

Remark 4.3. For θ ∈ Θ with 1 − θ(x) = θ(−x), Eq. (14) simplifies to

E[R_D(W_T) − R̂_S(W_T)] ≤ (√2/n) E √( ∑_{t=0}^{T−1} E^{Z̃^(2),U,J} β_t η_t ‖ζ_t‖² θ²((−1)^{U_J}(Y_{t,2} − Y_{t,1})) ).   (15)

For instance, θ(x) = 1/2 + (1/2) tanh(x) and θ(x) = 1/2 + (1/2) sign(x) satisfy 1 − θ(x) = θ(−x). /

Remark 4.4. By the law of total expectation, for any θ ∈ Θ, EGE_D(A) ≤ (√2/(2n)) E[V_1 + V_2], where

V_u := √( ∑_{t=0}^{T−1} E^{Z̃^(2),U_{J^c},J,U_J=u} β_t η_t ‖ζ_t‖² (1{u = 1} − θ(Y_{t,2} − Y_{t,1}))² ),  u ∈ {1, 2}.   (16)

To estimate V_u (u ∈ {1, 2}) for fixed J, the training set is S_u = {Z_1, ..., Z_{J−1}, Z̃_{u,J}, Z_{J+1}, ..., Z_n}. Hence V_1 and V_2 can be simulated from just n + 1 data points (Z_1, ..., Z_{J−1}, Z_{J+1}, ..., Z_n, Z̃_{1,J}, Z̃_{2,J}) ∼ D^⊗(n+1). /

The generalization bound in Eq. (14) places no restrictions on the learning rate and requires no Lipschitz continuity of the loss or of its gradient. In the next corollary, we study the bound in Eq. (14) when ℓ̃ is L-Lipschitz, and then compare the bound in this paper with some existing bounds in the literature.

Corollary 4.5. Under the assumption that ℓ̃ is L-Lipschitz, we have ‖ζ_t‖ ≤ 2L, and the generalization bound in Eq. (14) can be upper-bounded as

E[R_D(W_T) − R̂_S(W_T)] ≤ (√2 L/n) inf_{θ∈Θ} E √( ∑_{t=0}^{T−1} E^{Z̃^(2),U,J} β_t η_t (1{U_J = 1} − θ(Y_{t,2} − Y_{t,1}))² ).   (17)

Remark 4.6. Under an L-Lipschitz assumption, for the LD algorithm, Li et al. [14, Thm. 9] have

E[R_D(W_T) − R̂_S(W_T)] ≤ (√2 L/n) √( ∑_{t=0}^{T−1} β_t η_t ).   (18)

We immediately see that Eq. (17) provides a constant-factor improvement over Eq. (18) by naïvely using θ : x ↦ 1/2. Our bound is an order-wise improvement with respect to n over those of Bu et al. [6] and Pensia et al. [16] under the L-Lipschitz assumption. Negrea et al. [15, App. E.1] obtain

E[R_D(W_T) − R̂_S(W_T)] ≤ (L/(2(n−1))) √( ∑_{t=0}^{T−1} β_t η_t ),   (19)

which is a constant factor better than our bound for the choice θ : x ↦ 1/2. However, this θ essentially corresponds to no hypothesis test, yielding the same prior as in [15]. For more sophisticated choices of the decision function θ, even under a Lipschitz-surrogate-loss assumption, it is difficult to compare our bound with related work, because the exact impact of θ-discounting is difficult to quantify analytically. /

Remark 4.7. A prevailing method in [6, 14, 15, 16] for analyzing the generalization error of iterative algorithms is via the chain rule for KL, using priors for the joint distribution of the weight vectors that are Markov, i.e., given the t-th weight, the (t+1)-th weight is conditionally independent of the trajectory so far. Existing results using this approach accumulate a "penalty" for each step. In [6, 14, 15], the penalty terms are, respectively, the squared Lipschitz constant, the squared norm of the gradients, and the trace of the minibatch gradient covariance. The penalty term in our paper is the squared norm of the "two-sample incoherence" ζ_t, defined above as the difference between the gradient at a randomly selected training point and the gradient at the held-out point. However, the use of the chain rule together with the existing "Markovian" priors introduces a source of looseness: the accumulating penalty may diverge to +∞, yielding vacuous bounds (as seen in Fig. 1). The distinguishing feature of our data-dependent CMI analysis is that the penalty terms get "filtered" by the online hypothesis test via our non-Markovian prior, i.e., our prediction for time t + 1 depends on the whole trajectory. When the true index can be inferred from the previous weights, the penalty essentially stops accumulating. /

4.2.1 Empirical Results

To better understand the effect of discounting and the degree of improvement due to our new bounds and more sophisticated prior, we turn to simulation studies. We present and compare empirical evaluations of the generalization bound in Theorem 4.2 with the data-dependent generalization bounds of Li et al. [14] and Negrea et al. [15]. For brevity, many of the details of our implementation are deferred to Appendix G.

The functional forms of our bounds and of those in [14, 15] are nearly identical, as all of them use the chain rule for KL divergence. Nevertheless, the summands appearing in the bounds differ. The bound in [14] depends on the squared norm of the surrogate loss gradients, and the generalization bound in Negrea et al. [15] depends on the squared norm of the training set incoherence, defined as ‖∇ℓ̃(Z_J, W_t) − (1/(n−1)) ∑_{i∈[n], i≠J} ∇ℓ̃(Z_i, W_t)‖², where the training set is {Z_1, ..., Z_n} and J ∼ Unif([n]). The first key difference between our bound and the others is that the summand in our bound consists of two factors: the squared norm of the two-sample incoherence, ‖ζ_t‖², and the squared error probability of the hypothesis test at time t, given by the term (1{U_J = 1} − θ(Y_{t,2} − Y_{t,1}))². A consequence of this, and the second fundamental difference between our bound and existing bounds, is that our bound exhibits a trade-off in ‖ζ_t‖²: a large ‖ζ_t‖² makes the error of the hypothesis test small on future iterations, whereas the bounds in [14, 15] are uniformly increasing in the squared norm of the surrogate loss gradients and the training set incoherence, respectively.

We empirically evaluate and compare our bound with related work across various neural network architectures and datasets. Using Monte Carlo (MC) simulation, we compare estimates of our expected generalization error bound with estimates of the bounds from [14, 15] on MNIST [13], CIFAR10 [12], and FashionMNIST [23] in Fig. 1 and Table 1. For all plots we take θ(x) = (1/2)(1 + erf(x)) for our bound. In the last row of Table 1, we also report the unbiased estimate of our bound optimized over the choice of θ.
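A minimal sketch of how the MC estimate suggested by Remark 4.4 could be organized, under our own simplifying assumptions: J is fixed, θ is the erf-based choice above, and run_ld_and_collect is a hypothetical helper that trains with candidate u in the J-th slot and returns the per-step quantities needed by Eq. (16).

import numpy as np
from scipy.special import erf

theta = lambda x: 0.5 * (1.0 + erf(x))

def mc_estimate_bound(run_ld_and_collect, n, num_mc=50, seed=0):
    # Monte Carlo estimate of (sqrt(2)/(2n)) * E[V_1 + V_2] (Remark 4.4).
    # run_ld_and_collect(u, rng) must return two per-step arrays:
    # beta_t*eta_t*||zeta_t||^2 and the test statistic Y_{t,2} - Y_{t,1}.
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(num_mc):
        v_sum = 0.0
        for u in (1, 2):
            weights, dY = run_ld_and_collect(u, rng)
            err = (float(u == 1) - theta(dY)) ** 2   # squared HT error
            v_sum += np.sqrt(np.sum(weights * err))
        vals.append(v_sum)
    return np.sqrt(2.0) / (2.0 * n) * np.mean(vals)

Per Remark 4.4, the two required runs differ only in which of the two candidate points occupies the J-th slot, so a single draw of n + 1 data points suffices for each MC replicate.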
We plot the squared norm of the two-sample incoherence and of the training set incoherence, as well as the squared error probability of the hypothesis test. Fig. 1 and Table 1 show that our bound is tighter and remains non-vacuous after many more iterations. We also observe that the variances of the MC estimates of our bound are smaller than those of Negrea et al. [15], and also smaller than those of Li et al. [14] for the CIFAR10 and MNIST-CNN experiments. Moreover, the error probability of the hypothesis test decays with the number of iterations, which matches the intuition that, as one observes more noisy increments of the process, one becomes better able to determine which point is contributing to the gradient.

For CIFAR10, ‖ζ_t‖² is large because the generalization gap is large. However, as noted at the beginning of this section, a large ‖ζ_t‖² makes the hypothesis test easier on subsequent iterations. For instance, after iteration 600 the error is vanishingly small in the CIFAR10 experiments, which results in a plateau in the bound. We observe the same phenomenon in the Fashion-MNIST experiment. This property distinguishes our bound from those in [14, 15]. The results for MNIST with a CNN show that ‖ζ_t‖² and the training set incoherence are close to each other; the reason is that the generalization gap is small. For this experiment, the hypothesis test performs only slightly better than random guessing, again because the generalization gap is small and it is difficult to distinguish the training samples from the test samples. This explains why our generalization bound is close to that of [15]. Nevertheless, the hypothesis testing performance improves with more training iterations, leading the two bounds to diverge, with our bound performing better at later iterations. Finally, the scaling of our bound with respect to the number of iterations is tighter than that of the bounds in [14, 15], as can be seen in Fig. 1.

[Figure 1 panels omitted. The figure contains four rows of plots (MNIST with MLP, MNIST with CNN, F-MNIST with CNN, CIFAR with CNN) and three columns (expected generalization error; squared norm of the incoherence; squared error probability of HT), with curves for Negrea et al. (2019), Li, Luo, and Qiao (2020), and the CMI bound (ours).]

Figure 1: Numerical results for various datasets and architectures. All x-axes represent the training iteration. The plots in the first column depict a Monte Carlo estimate of our bounds alongside those of Li et al.
[14] and Negrea et al. [15]. The plots in the second column compare the mean of the training set incoherence of [15] with the two-sample incoherence in our bound. Finally, the plots in the third column show the mean of the squared error probability of the hypothesis test performed by the proposed prior in our bound.

Table 1: Summary of the results. The generalization bounds are reported at the end of training.

                       MNIST-MLP       MNIST-CNN       CIFAR10-CNN        FMNIST-CNN
Training error         4.33±0.01%      2.59±0.01%      9.39±0.36%         7.96±0.03%
Generalization error   0.88±0.01%      0.55±0.01%      32.89±0.44%        3.71±0.03%
Negrea et al. [15]     67.93±16.25%    20.98±5.01%     4112.63±567.08%    82.89±12.64%
Li et al. [14]         600.29±1.99%    245.03±2.37%    20754.32±75.95%    598.62±3.21%
CMI (ours)             44.65±4.27%     16.51±1.41%     71.76±4.82%        48.01±4.22%
CMI-OPT (ours)         39.06±5.52%     13.24±1.53%     63.00±5.97%        41.17±5.85%

Acknowledgments

The authors would like to thank Blair Bilodeau and Yasaman Mahdaviyeh for feedback on drafts of this work, and Shiva Ketabi for helpful discussions on the implementation of the bounds.

Funding

MH is supported by the Vector Institute. JN is supported by an NSERC Vanier Canada Graduate Scholarship and by the Vector Institute. DMR is supported by an NSERC Discovery Grant and an Ontario Early Researcher Award. This research was carried out in part while MH, JN, GKD, and DMR were visiting the Institute for Advanced Study. JN's visit to the Institute was funded by an NSERC Michael Smith Foreign Study Supplement. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners).

Broader Impact

This work builds upon the community's understanding of generalization error for machine learning methods. This has a positive impact on the scientific advancement of the field and may lead to further improvements in our understanding, methodologies, and applications of machine learning and AI. While there are no obvious direct societal implications of the present work, the indirect and longer-term impact on society may be positive, negative, or both, depending on how, where, and for what purposes the machine learning methods that benefit from our research are used in the future.
1. What is the focus and contribution of the paper regarding upper bounds on the expected generalization gap?
2. What are the strengths of the proposed approach, particularly in its theoretical grounding and empirical evaluation?
3. Do you have any concerns or questions about the relevance and significance of the paper's findings?
4. How does the reviewer assess the novelty and impact of the work compared to prior research?
5. Are there any open questions or areas for further investigation raised by the paper?
Summary and Contributions
Strengths
Weaknesses
Summary and Contributions

This paper deals with upper bounds on the expected generalization gap (EGE) of classification algorithms, specifically looking at data-dependent bounds (i.e., bounds that can be estimated from the training data itself). It builds on the recent work of Russo and Zou, who proved that this quantity can be upper bounded using the mutual information between the output of the algorithm and the training sample (IOMI), and Steinke and Zakynthinou, who showed that this quantity can be upper bounded by the conditional mutual information between the output of the algorithm and a selection function conditioned on a super-sample (CMI^k). This work first shows that CMI^k is always smaller than IOMI and converges to IOMI as k goes to infinity, and then provides a refined bound where the CMI is generalized to use a random subset of the selection function. This new bound is then applied to the Langevin dynamics algorithm (leveraging the fact that one can use a prior that depends on the optimization trajectory) to derive an improved generalization bound for this algorithm, which is shown empirically to have a smaller numerical value than previous bounds.

Strengths

Soundness:
- theoretical grounding: the proofs seem to be correct to the extent I could verify.
- empirical evaluation: the experiments seem reasonable and appear to support the claim of the improved tightness of the proposed bound.
Significance and novelty:
Relevance:
Other comments:

Weaknesses

Relevance: One important question is whether the proposed bounds lead to new insights. The empirical demonstration of obtaining a tighter bound for the Langevin dynamics algorithm is certainly a good first step to show that the new data-dependent quantity that appears in the bounds partially captures how favorable the distribution is for the algorithm, but it is hard to say that one can derive from that an improved understanding of the algorithm itself. It is also not clear whether this could lead to designing an even better algorithm. So I believe that the question of "what can be gained from the new bound" is not really addressed, which is why my score is not higher.
NIPS
1. What is the main contribution of the paper in the field of information-theoretic generalization bounds?
2. Can you explain the significance of the paper's tighter CMI-based bounds compared to previous works?
3. How does the paper's approach differ from other learning-theoretic tools in deep learning?
4. Can you provide examples or scenarios where the paper's bound would be particularly useful or relevant?
5. How does the paper's prior design for noisy, iterative algorithms contribute to its overall impact?
6. Are there any potential limitations or areas for improvement in the paper's approach or findings?
Summary and Contributions
Strengths
Weaknesses
Summary and Contributions I'd like to begin with the disclaimer that I'm only superficially familiar with information-theoretic generalization bounds. However I'm quite familiar with other learning-theoretic tools and the area of generalization bounds in deep learning. ----------------- Short summary --------------- This paper builds on a line of work studying information-theoretic generalization bounds. The paper first begins by relating the recent Conditional Mutual Information-based bounds by S&Z'20 to the Mutual Information-based bounds by R&Z'16/X&R'17. Next, the paper establishes general, tighter CMI-based bounds inspired by other recent ideas from Negrea et al., '19 & Bu et al., '19. Finally, these bounds are instantiated for the Langevin Dynamics algorithm where these bounds are then empirically demonstrated to be tighter. More details: ----------------- a) The paper first shows that the CMI term in S&Z'20 is always <= the mutual information term in R&Z'16/X&R'17. They also show an equality when the super-sample size used in CMI tends to infinity. b) Next, inspired by Bu et al., '19., the paper presents a more general version of the S&Z'20. In S&Z'20, one computes I( W; U | Z) where Z is the super-sample of n+n datapoints and U consists of n 0/1 values corresponding which half of Z each of the datapoints in the training set S come from. This paper's bound is based on I(W; U_{J} | Z, J) where J is a random subset of {1, ... n} of cardinality m (and when m=n, we get the original bound). This general version is shown to be tighter than S&Z'20; the tightest bound achieved when m=1. c) In order to make the above bound practically computable, the paper employs an idea from Negrea et al., '19 to bound the CMI in terms of the KL divergence between a prior and posterior. -- This form of the bound also gives some idea as to why this paper's bound is tighter than what you'd get from a KL-version of the original CMI bound: the prior in this bound can be data-dependent in that it can be "informed by" the n-m datapoints in S not indexed by J (i.e., S_{J^c}). d) Finally, the paper applies this KL divergence bound for noisy, iterative algorithms. Like is standard in this line of work, the KL divergence term is split into a chain of KL divergence terms corresponding to each step of the iteration. A crucial contribution here is to cleverly design the the prior for each of these steps for the specific LD algorithm, within the above framework. --- In more detail: under the setting where m=1, the paper presents a prior for any time t as a convex combination of the two different weights that one would reach starting from W_{t-1} under the two possible choices for the J'th datapoint. The exact ratio of this combination is "informed by" looking W_1, W_2 ... W_{t-1} and gauging which of the two choices in the supersample is really in the training set. e) The above bound is numerically computed for a variety of settings (MNIST+MLP, MNIST+CNN, F-MNIST+CNN, CIFAR+CNN), and is shown to be non-vacuous compared to the bounds of Negrea et al., '19 and Li et al., '20. Also, interestingly, these bounds saturate with the number of timesteps unlike the other existing bounds (which is also what was intuitively expected from the fact that the priors in later timesteps can be more informed.) Strengths 1. The paper has built on a multitude of recent ideas to advance our understanding of information-theoretic generalization bounds. 2. 
Although these ideas are inspired by recent papers, it is just as valuable to identify which existing ideas can be combined, how to combine them, and to actually show when the combinations work and in what respects they are better. 3. Overall, this exploration has resulted in multiple solid, valuable and novel contributions, ranging from foundational ones (regarding the CMI bounds and the new generalization bound) to more empirical/applied ones (instantiating the bound for LD and showing that it's better than existing bounds). 4. I appreciate the fact that the paper presents "a complete story" in that it presents a general tool and also applies it. 5. I think it's also exciting to see that empirically the resulting bound saturates with iteration count, unlike existing bounds which increase as we train the network further and further. This is an important property that one would want from a generalization bound; further, it seems like the idea of fixing a super-sample and focusing on a subset of it is critical to achieving this. Weaknesses I don't have any negative feedback on this paper. I think this is a nice paper definitely worth publishing. 6. I do have some suggestions regarding the clarity (noted under the question on clarity), but I must add some of those might simply be because of my lack of expertise in information-theoretic generalization bounds.
NIPS
Title Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

Abstract The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm's output and the training sample. In this work, we study the proposal by Steinke and Zakynthinou (2020) to reason about the generalization error of a learning algorithm by introducing a super sample that contains the training sample as a random subset and computing mutual information conditional on the super sample. We first show that these new bounds based on the conditional mutual information are tighter than those based on the unconditional mutual information. We then introduce yet tighter bounds, building on the "individual sample" idea of Bu et al. (2019) and the "data dependent" ideas of Negrea et al. (2019), using disintegrated mutual information. Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the super sample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests.

1 Introduction

Let $\mathcal{D}$ be an unknown distribution on a space $\mathcal{Z}$, let $\mathcal{W}$ be a set of parameters that index a set of predictors, and let $\ell : \mathcal{Z} \times \mathcal{W} \to [0,1]$ be a bounded loss function. Consider a (randomized) learning algorithm $A$ that selects an element $W$ in $\mathcal{W}$, based on an IID sample $S = (Z_1, \ldots, Z_n) \sim \mathcal{D}^{\otimes n}$. For $w \in \mathcal{W}$, let $R_{\mathcal{D}}(w) = \mathbb{E}\,\ell(Z, w)$ denote the risk of predictor $w$, and $\hat{R}_S(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(Z_i, w)$ denote the empirical risk. Our interest in this paper is the (expected) generalization error of $A$ with respect to $\mathcal{D}$, $\mathrm{EGE}_{\mathcal{D}}(A) = \mathbb{E}[R_{\mathcal{D}}(W) - \hat{R}_S(W)]$.

In this work, we study bounds on generalization error in terms of information-theoretic measures of dependence between the data and the output of the learning algorithm. This approach was initiated by Russo and Zou [18, 19] and has since been extended [2, 3, 6, 9, 17, 24]. The basic result in this line of work is that the generalization error can be bounded in terms of the mutual information $I(W; S)$ between the data and the learned parameter, a quantity that has been called the information usage or input–output mutual information of $A$ with respect to $\mathcal{D}$, which we denote by $\mathrm{IOMI}_{\mathcal{D}}(A)$. The following result is due to Russo and Zou [18] and Xu and Raginsky [24].

Theorem 1.1. $\mathrm{EGE}_{\mathcal{D}}(A) \le \sqrt{\mathrm{IOMI}_{\mathcal{D}}(A)/(2n)}$.

Theorem 1.1 formalizes the intuition that a learning algorithm without heavy dependence on the training set will generalize well. This result has been extended in many directions: Raginsky et al. [17] connect variants of $\mathrm{IOMI}_{\mathcal{D}}(A)$ to different notions of stability. Asadi et al. [3] establish refined bounds using chaining techniques for subgaussian processes. Bu et al. [6] obtain a tighter bound by replacing $\mathrm{IOMI}_{\mathcal{D}}(A)$ with the mutual information between $W$ and a single training data point. Negrea et al. [15] propose variants that allow for data-dependent estimates. See also [1, 2, 4, 9, 10].

Our focus in this paper is on a new class of information-theoretic bounds on generalization error, proposed by Steinke and Zakynthinou [21]. Fix $k \ge 2$, let $[k] = \{1, \ldots, k\}$, let $U^{(k)} = (U_1, \ldots, U_n) \sim \mathrm{Unif}([k]^n)$, and let $\tilde{Z}^{(k)} \sim \mathcal{D}^{\otimes(k \times n)}$ be a $k \times n$ array of IID random elements in $\mathcal{Z}$, independent from $U^{(k)}$. Let $S = (Z_{U_1,1}, \ldots, Z_{U_n,n})$
and let $W$ be a random element in $\mathcal{W}$ such that, conditional on $S$, $U^{(k)}$, and $\tilde{Z}^{(k)}$, $W$ has distribution $A(S)$. It follows that, conditional on $S$, $W$ is independent from $U^{(k)}$ and $\tilde{Z}^{(k)}$. By construction, the data set $S$ is hidden inside the super sample; the indices $U^{(k)}$ specify where. Steinke and Zakynthinou [21] use these additional structures to define:

Definition 1.2. The conditional mutual information of $A$ w.r.t. $\mathcal{D}$ is $\mathrm{CMI}^k_{\mathcal{D}}(A) = I(W; U^{(k)} \mid \tilde{Z}^{(k)})$.

Intuitively, $\mathrm{CMI}^k_{\mathcal{D}}(A)$ captures how well we can recognize which samples from the given super sample $\tilde{Z}^{(k)}$ were in the training set, given the learned parameters. This intuition and the connection of $\mathrm{CMI}^k_{\mathcal{D}}(A)$ with the membership attack [20] can be formalized using Fano's inequality, showing that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ can be used to lower bound the error of any estimator of $U^{(k)}$ given $W$ and $\tilde{Z}^{(k)}$. (See Appendix A.) Steinke and Zakynthinou [21] connect $\mathrm{CMI}^k_{\mathcal{D}}(A)$ with well-known notions in learning theory such as distributional stability, differential privacy, and VC dimension, and establish the following bound [21, Thm. 5.1] in the case $k = 2$, the extension to $k \ge 2$ being straightforward:

Theorem 1.3. $\mathrm{EGE}_{\mathcal{D}}(A) \le \sqrt{2\,\mathrm{CMI}^k_{\mathcal{D}}(A)/n}$.
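To make this construction concrete, here is a minimal NumPy sketch (ours, purely illustrative; the Gaussian stand-in for $\mathcal{D}$ and all variable names are our own assumptions) of the super sample, the indices, and the hidden training set for $k = 2$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, d = 2, 5, 3          # super-sample rows, sample size, feature dim

# Super sample: a k x n array of IID draws from the data distribution D
# (here, a standard Gaussian stands in for D).
Z_tilde = rng.standard_normal((k, n, d))

# Indices U ~ Unif([k]^n): which row of each column is in the training set.
U = rng.integers(0, k, size=n)

# Training sample S = (Z_{U_1,1}, ..., Z_{U_n,n}).
S = Z_tilde[U, np.arange(n)]            # shape (n, d)

# Conditional on Z_tilde, the training set can take only k^n values;
# CMI asks how much the learned W reveals about which one was used.
print(S.shape, U)
```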
This paper improves our understanding of the framework introduced by Steinke and Zakynthinou [21], identifies tighter bounds, and applies these techniques to the analysis of a real algorithm. In Section 2, we present several formal connections between the two aforementioned information-theoretic approaches for studying generalization. Our first result bridges $\mathrm{IOMI}_{\mathcal{D}}(A)$ and $\mathrm{CMI}^k_{\mathcal{D}}(A)$, showing that for any learning algorithm, any data distribution, and any $k$, $\mathrm{CMI}^k_{\mathcal{D}}(A)$ is less than $\mathrm{IOMI}_{\mathcal{D}}(A)$. We also show that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ converges to $\mathrm{IOMI}_{\mathcal{D}}(A)$ as $k \to \infty$ when $|\mathcal{W}|$ is finite. In Section 3, we establish two novel bounds on generalization error using the random index and super sample structure of Steinke and Zakynthinou, and show that both our bounds are tighter than those based on $\mathrm{CMI}^k_{\mathcal{D}}(A)$. Finally, in Section 4, we show how to construct generalization error bounds for noisy, iterative algorithms using the generalization bound proposed in Section 3. Using the Langevin dynamics algorithm as our example, we introduce a new type of prior for iterative algorithms that "learns" from the past trajectory, using a form of hypothesis testing, in order to not "pay" again for information obtained at previous iterations. Experiments show that our new bound is tighter than [14, 15], especially in the late stages of training, where the hypothesis-test component of the bound discounts the contributions of new gradients. Our new bounds are non-vacuous for many more epochs than related work, and do not diverge or exceed 1 even when severe overfitting occurs.

1.1 Contributions

1. We characterize the connections between $\mathrm{IOMI}_{\mathcal{D}}(A)$ and $\mathrm{CMI}^k_{\mathcal{D}}(A)$. We show that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ is always less than $\mathrm{IOMI}_{\mathcal{D}}(A)$ for any data distribution, learning algorithm, and $k$. Further, we prove that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ converges to $\mathrm{IOMI}_{\mathcal{D}}(A)$ as $k$ goes to infinity for finite parameter spaces.
2. We provide novel generalization bounds that relate generalization to the mutual information between learned parameters and a random subset of the random indices $U_1, \ldots, U_n$.
3. We apply our generalization bounds to the Langevin dynamics algorithm by constructing a specific generalized prior and posterior. We employ a generalized prior that learns about the values of the indices $U$ from the optimization trajectory. To our knowledge, this is the first generalized prior that learns about the dataset from the iterates of the learning algorithm.
4. We show empirically that our bound on the expected generalization error of the Langevin dynamics algorithm is tighter than other existing bounds in the literature.

1.2 Definitions from Probability and Information Theory

Let $S, T$ be measurable spaces, let $\mathcal{M}_1(S)$ be the space of probability measures on $S$, and define a probability kernel from $S$ to $T$ to be a measurable map from $S$ to $\mathcal{M}_1(T)$. For random elements $X$ in $S$ and $Y$ in $T$, write $\mathbb{P}[X] \in \mathcal{M}_1(S)$ for the distribution of $X$ and write $\mathbb{P}^Y[X]$ for (a regular version of) the conditional distribution of $X$ given $Y$, viewed as a $\sigma(Y)$-measurable random element in $\mathcal{M}_1(S)$. Recall that $\mathbb{P}^Y[X]$ is a regular version if, for some probability kernel $\kappa$ from $T$ to $S$, we have $\mathbb{P}^Y[X] = \kappa(Y)$ a.s. If $Y$ is $\sigma(X)$-measurable then $Y$ is a function of $X$. If a random measure $P$ is $\sigma(X)$-measurable then the measure $P$ is determined by $X$, but a random element $Y$ with $\mathbb{P}^X[Y] = P$ is not $X$-measurable unless it is degenerate. If $X$ is a random variable, write $\mathbb{E}X$ for the expectation of $X$ and write $\mathbb{E}^Y X$ or $\mathbb{E}[X \mid Y]$ for (an arbitrary version of) the conditional expectation of $X$ given $Y$, which is $Y$-measurable. For a random element $X$ on $S$ and a probability kernel $P$ from $S$ to $T$, the composition $P(X) := P \circ X$ is a $\sigma(X)$-measurable random measure on $T$. We occasionally use this notation to refer to a kernel $P$ implicitly by the way it acts on $X$.

Let $P, Q$ be probability measures on a measurable space $S$. For a $P$-integrable or nonnegative measurable function $f$, let $P[f] = \int f\,dP$. When $Q$ is absolutely continuous with respect to $P$, denoted $Q \ll P$, we write $\frac{dQ}{dP}$ for the Radon–Nikodym derivative of $Q$ with respect to $P$. We rely on several notions from information theory: the KL divergence of $Q$ with respect to $P$, denoted $\mathrm{KL}(Q\|P)$, is $Q[\log\frac{dQ}{dP}]$ when $Q \ll P$ and $\infty$ otherwise. Let $X$, $Y$, and $Z$ be random elements, and let $\otimes$ form product measures. The mutual information between $X$ and $Y$ is $I(X;Y) = \mathrm{KL}(\mathbb{P}[(X,Y)] \,\|\, \mathbb{P}[X] \otimes \mathbb{P}[Y])$. The disintegrated mutual information between $X$ and $Y$ given $Z$ is¹ $I^Z(X;Y) = \mathrm{KL}(\mathbb{P}^Z[(X,Y)] \,\|\, \mathbb{P}^Z[X] \otimes \mathbb{P}^Z[Y])$. The conditional mutual information of $X$ and $Y$ given $Z$ is $I(X;Y \mid Z) = \mathbb{E}\,I^Z(X;Y)$.

2 Connections between $\mathrm{IOMI}_{\mathcal{D}}(A)$ and $\mathrm{CMI}^k_{\mathcal{D}}(A)$

In this section, we compare approaches for the information-theoretic analysis of generalization error, and we aim to unify the two main information-theoretic approaches for studying generalization. In Theorems 2.1 and 2.2 we show that for any learning algorithm and any data distribution, $\mathrm{CMI}^k_{\mathcal{D}}(A)$ provides a tighter measure of dependence than $\mathrm{IOMI}_{\mathcal{D}}(A)$, and that one can recover $\mathrm{IOMI}_{\mathcal{D}}(A)$-based bounds from $\mathrm{CMI}^k_{\mathcal{D}}(A)$ for finite parameter spaces. A fundamental difference between $\mathrm{IOMI}_{\mathcal{D}}(A)$ and $\mathrm{CMI}^k_{\mathcal{D}}(A)$ is that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ is bounded by $n \log k$ [21], while $\mathrm{IOMI}_{\mathcal{D}}(A)$ can be infinite even for learning algorithms that provably generalize [6]. One of the motivations of Steinke and Zakynthinou was that proper empirical risk minimization algorithms over threshold functions on $\mathbb{R}$ have large $\mathrm{IOMI}_{\mathcal{D}}(A)$ [4]. In contrast, some such algorithms have small $\mathrm{CMI}^k_{\mathcal{D}}(A)$. Our first result shows that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ is never larger than $\mathrm{IOMI}_{\mathcal{D}}(A)$.

Theorem 2.1. For every $k \ge 2$, $I(W;S) = I(W; \tilde{Z}^{(k)}) + I(W; U^{(k)} \mid \tilde{Z}^{(k)})$ and $\mathrm{CMI}^k_{\mathcal{D}}(A) \le \mathrm{IOMI}_{\mathcal{D}}(A)$.

Next, we address the role of the size of the super sample in CMI. In [21], CMI is defined using a super sample of size $2n$ ($k = 2$) only.
Our next result demonstrates that $\mathrm{CMI}^k_{\mathcal{D}}(A)$ agrees with $\mathrm{IOMI}_{\mathcal{D}}(A)$ in the limit as $k \to \infty$ when the parameter space is finite.

Theorem 2.2. If the output of $A$ takes values in a finite set, then $\lim_{k\to\infty} \mathrm{CMI}^k_{\mathcal{D}}(A) = \mathrm{IOMI}_{\mathcal{D}}(A)$.

Combining Theorems 1.3 and 2.2, we obtain
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \lim_{k\to\infty} \sqrt{\frac{2\,\mathrm{CMI}^k_{\mathcal{D}}(A)}{n}} = \sqrt{\frac{2\,\mathrm{IOMI}_{\mathcal{D}}(A)}{n}}, \quad (1)$$
when the parameter space is finite. Comparing Eq. (1) with Theorem 1.1, we observe that Eq. (1) is twice as large. In Theorem B.1, we present a refined bound based on $\mathrm{CMI}^k_{\mathcal{D}}(A)$ which asymptotically matches Theorem 1.1. The proofs of the results of this section appear in Appendix C.

¹Letting $\phi$ satisfy $\phi(Z) = I^Z(X;Y)$ a.s., define $I(X;Y \mid Z = z) = \phi(z)$. This notation is necessarily well defined only up to a null set under the marginal distribution of $Z$.

3 Sharpened Bounds based on Individual Samples

We now present two novel generalization bounds and show they provide a tighter characterization of the generalization error than Theorem 1.3. The results are inspired by the improvements on $\mathrm{IOMI}_{\mathcal{D}}(A)$ due to Bu et al. [6]. In particular, Theorem 3.1 bounds the expected generalization error in terms of the mutual information between the output parameter and a random subsequence of the indices $U^{(2)}$, given the super sample. Theorem 3.4 provides a generalization bound in terms of the disintegrated mutual information between each individual element of $U^{(2)}$ and the output of the learning algorithm, $W$. The bound in Theorem 3.4 is an analogue of [6, Prop. 1] for Theorem 1.3. In this section, as in Steinke and Zakynthinou [21], we only consider $\tilde{Z}^{(k)}$ and $U^{(k)}$ with $k = 2$, so we drop the superscript from $U^{(k)}$. Let $U = (U_1, \ldots, U_n)$. The proofs for the results of this section appear in Appendix D.

Theorem 3.1. Fix $m \in [n]$ and let $J = (J_1, \ldots, J_m)$ be a random subset of $[n]$, distributed uniformly among all subsets of size $m$ and independent from $W$, $\tilde{Z}^{(2)}$, and $U$. Then
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \mathbb{E}\sqrt{\frac{2\,I^{\tilde{Z}^{(2)}}(W; U_J \mid J)}{m}}. \quad (2)$$

By applying Jensen's inequality to Theorem 3.1, we obtain
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \sqrt{\frac{2\,I(W; U_J \mid \tilde{Z}^{(2)}, J)}{m}}. \quad (3)$$

Our next result, Theorem 3.2, lets us compare Eq. (3) for different values of $m = |J|$.

Theorem 3.2. Let $m_1 < m_2 \in [n]$, and let $J^{(m_1)}, J^{(m_2)}$ be random subsets of $[n]$, distributed uniformly among all subsets of size $m_1$ and $m_2$, respectively, and independent from $W$, $\tilde{Z}^{(2)}$, and $U$. Then
$$\frac{I(W; U_{J^{(m_1)}} \mid \tilde{Z}^{(2)}, J^{(m_1)})}{m_1} \le \frac{I(W; U_{J^{(m_2)}} \mid \tilde{Z}^{(2)}, J^{(m_2)})}{m_2}. \quad (4)$$
Consequently, taking $m_2 = n$, for all $1 \le m_1 \le n$,
$$\mathbb{E}\sqrt{\frac{2\,I^{\tilde{Z}^{(2)}}(W; U_{J^{(m_1)}} \mid J^{(m_1)})}{m_1}} \le \sqrt{\frac{2\,I(W; U \mid \tilde{Z}^{(2)})}{n}}. \quad (5)$$

Corollary 3.3. $\mathrm{EGE}_{\mathcal{D}}(A) \le \sqrt{2\,I(W; U_J \mid \tilde{Z}^{(2)}, J)/m}$. The case $m = |J| = n$ is equivalent to Theorem 1.3. The bound is increasing in $m \in [n]$, and the tightest bound is achieved when $m = |J| = 1$. Also, Eq. (5) shows that our bound in Theorem 3.1 is tighter than Theorem 1.3 for $k = 2$.

To further tighten Theorem 3.2 when $m = 1$, we show that we can pull the expectation over both $\tilde{Z}^{(2)}$ and $J$ outside the concave square-root function.

Theorem 3.4. Let $J \sim \mathrm{Unif}([n])$ (i.e., $m = 1$ above) be independent from $W$, $\tilde{Z}^{(2)}$, and $U$. Then
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \mathbb{E}\sqrt{2\,I^{\tilde{Z}^{(2)},J}(W; U_J)} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\sqrt{2\,I^{\tilde{Z}^{(2)}}(W; U_i)}. \quad (6)$$

Remark 3.5. Theorem 3.4 is tighter than Theorem 1.3 since
$$\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\sqrt{2\,I^{\tilde{Z}^{(2)}}(W; U_i)} \le \sqrt{\sum_{i=1}^{n} \frac{2}{n}\,I(W; U_i \mid \tilde{Z}^{(2)})} \le \sqrt{\frac{2}{n}\,I(W; U \mid \tilde{Z}^{(2)})}. \quad (7)$$
The first inequality is Jensen's, while the second follows from the independence of the indices $U_i$.

3.1 Controlling CMI bounds using KL Divergence

It is often difficult to compute MI directly.
One standard approach in the literature is to bound MI by the expectation of the KL divergence of the conditional distribution of the parameters given the data (the "posterior") with respect to a "prior". The statement below is adapted from Negrea et al. [15].

Lemma 3.6. Let $X$, $Y$, and $Z$ be random elements. For all $\sigma(Z)$-measurable random probability measures $P$ on the space of $Y$, $I^Z(X;Y) \le \mathbb{E}^Z[\mathrm{KL}(\mathbb{P}^{X,Z}[Y] \,\|\, P)]$ a.s., with a.s. equality for $P = \mathbb{E}^Z[\mathbb{P}^{X,Z}[Y]] = \mathbb{P}^Z[Y]$.

We refer to the conditional law of $W$ given $S$ as the "posterior" of $W$ given $S$, which we denote $Q = \mathbb{P}^S[W] = \mathbb{P}^{\tilde{Z}^{(2)},U}[W]$, and to $P$ as the prior. This can be used in combination with, for example, Lemma 3.6 and Theorem 1.3 to obtain that for any $\tilde{Z}^{(2)}$-measurable random prior $P(\tilde{Z}^{(2)})$,
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \sqrt{\frac{2\,I(W; U \mid \tilde{Z}^{(2)})}{n}} \le \sqrt{\frac{2\,\mathbb{E}[\mathrm{KL}(Q \,\|\, P(\tilde{Z}^{(2)}))]}{n}}. \quad (8)$$
Note that the prior only has access to $\tilde{Z}^{(2)}$; therefore, from its perspective, the training set can take $2^n$ different values. Alternatively, combining Lemma 3.6 and Theorem 3.1 yields
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \mathbb{E}\sqrt{\frac{2\,\mathbb{E}^{\tilde{Z}^{(2)}} I^{\tilde{Z}^{(2)}}(W; U_J \mid U_{J^c}, J)}{m}} \le \mathbb{E}\sqrt{\frac{2\,\mathbb{E}^{\tilde{Z}^{(2)}}[\mathrm{KL}(Q \,\|\, P(\tilde{Z}^{(2)}, U_{J^c}, J))]}{m}}. \quad (9)$$
In Eq. (9) the prior has access to $n - m$ samples in the training set, $S_{J^c}$, because $\tilde{Z}^{(2)}_{U_{J^c}} = S_{J^c}$. However, since $\tilde{Z}^{(2)}$ is known to the prior, the training set can take only $2^m$ distinct values from the point of view of the prior in Eq. (9). This is a significant reduction in the amount of information that can be carried by the indices in $U_J$ about the output hypothesis. Consequently, priors can be designed to better exploit the dependence between the output hypothesis and the index set.

3.2 Tighter Generalization bound for the case m = 1

Since the strategy above controls MI-based expressions via KL divergences, one may ask whether a bound derived with similar tools, but directly in terms of KL, can be tighter than the combination of Lemma 3.6 and Theorem 3.1. The following result shows that for $m = 1$ a tighter bound can be derived by pulling the expectation over both $U_{J^c}$ and $J$ outside the concave square-root function.

Theorem 3.7. Let $J \sim \mathrm{Unif}([n])$ be independent from $W$, $U$, and $\tilde{Z}^{(2)}$. Let $Q = \mathbb{P}^{\tilde{Z}^{(2)},U}[W]$ and let $P$ be a $\sigma(\tilde{Z}^{(2)}, U_{J^c}, J)$-measurable random probability measure. Then
$$\mathrm{EGE}_{\mathcal{D}}(A) \le \mathbb{E}\sqrt{2\,\mathrm{KL}(Q \,\|\, P)}. \quad (10)$$
Here, the KL divergence is between two $\sigma(\tilde{Z}^{(2)}, J, U)$-measurable random measures, so it is itself random.

4 Generalization bounds for noisy, iterative algorithms

We apply this new class of generalization bounds to non-convex learning. We analyze the Langevin dynamics (LD) algorithm [8], following the analysis pioneered by Pensia et al. [16]. The example we set here is a blueprint for building bounds for other iterative algorithms. Our approach is similar to the recent advances by Li et al. [14] and Negrea et al. [15], employing data-dependent estimates to obtain easily simulated bounds. We find our new results allow us to exploit past iterates to obtain tighter bounds. The influence of past iterates is seen to take the form of a hypothesis test.

4.1 Bounding Generalization Error via Hypothesis Testing

The chain rule for KL divergence is a key ingredient of information-theoretic generalization error bounds for iterative algorithms [6, 14, 15, 16]. $\mathcal{W}^{\{0,\ldots,T\}}$ denotes the space of parameter trajectories generated by an iterative algorithm in $T$ iterations. For any measure $\nu$ on $\mathcal{W}^{\{0,\ldots,T\}}$ and $W \sim \nu$, let $\nu_0$ denote the marginal law of $W_0$, and $\nu_{t|}$ the conditional law of $W_t$ given $W_0, \ldots, W_{t-1}$. The following lemma bounds the KL divergence involving the posterior of the terminal parameter by a sum of KL divergences over the individual steps of the trajectory.

Lemma 4.1 (Chain Rule for KL). Let $Q, P$ be probability measures on $\mathcal{W}^{\{0,\ldots,T\}}$ with $Q_0 = P_0$. Then
$$\mathrm{KL}(Q_T \,\|\, P_T) \le \mathrm{KL}(Q \,\|\, P) = \sum_{t=1}^{T} Q_{0:(t-1)}[\mathrm{KL}(Q_{t|} \,\|\, P_{t|})].$$
The benefits of using the chain rule to analyze an iterative algorithm are two-fold. First, we gain analytical tractability; many bounds that appear in the literature implicitly require this form of incrementation [6, 14, 15, 16]. Second, and novel to the present work, the information in the optimization trajectory can be exploited to identify $U$ from the history of $W$. In order to understand how the prior may take advantage of information from the optimization trajectory, consider applying Lemma 4.1 to the KL term in Eq. (9). We have
$$\mathrm{KL}\big(Q_T \,\|\, P_T(\tilde{Z}^{(2)}, U_{J^c}, J)\big) \le \sum_{t=1}^{T} \mathbb{E}^{\tilde{Z}^{(2)}, U_{J^c}, J}\big[\mathrm{KL}\big(Q_{t|} \,\|\, P_{t|}(\tilde{Z}^{(2)}, U_{J^c}, J)\big)\big].$$
Here $P_{t|}(\tilde{Z}^{(2)}, U_{J^c}, J)$ is a $\sigma(\tilde{Z}^{(2)}, U_{J^c}, J, W_{0:t-1})$-measurable random probability measure. The prior may use $U_{J^c}$, $\tilde{Z}^{(2)}$, and $J$ to reduce the number of possible values that $U$ can take to $2^{|J|}$. Moreover, since $U_J$ is constant during optimization, $W_0, W_1, W_2, \ldots, W_{t-1}$ may leak some information about $U_J$, and the prior can use this information to tighten the bound by choosing a $P_{t|}$ that achieves small $\mathrm{KL}(Q_{t|} \,\|\, P_{t|})$. In the special case where the prior can perfectly estimate $U_J$ from $W_0, W_1, W_2, \ldots, W_{t-1}$, we can set $P_{t|} = Q_{t|}$ and $\mathrm{KL}(Q_{t|} \,\|\, P_{t|})$ will be zero. As will be seen in the next subsection, we can explicitly design a prior that uses the information in the optimization trajectory for the LD algorithm.

The process by which the prior can learn from the trajectory can be viewed as an online hypothesis test, or binary decision problem, where the prior at time $t$ allocates belief between $2^m$ possible explanations, given by the possible values of $U_J$, based on the evidence provided by $W_0, \ldots, W_t$. If the prior is able to identify $U_J$ based on the $W$s, then the bound stops accumulating, even if the gradients taken by subsequent training steps are large. This means that penalties for information obtained later in training are discounted based on the information obtained earlier in training.
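Before turning to the example, a toy numerical check of Lemma 4.1 may help (our sketch, not from the paper): for two trajectory laws with independent Gaussian increments and a shared initial point, the per-step KL terms have a closed form, their sum equals the trajectory-level KL, and it dominates the KL between the terminal marginals:

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma2 = 10, 0.5                 # steps, per-step Gaussian variance

# Two trajectory laws on R^{T+1} with W_0 = 0 under both (Q_0 = P_0):
# Q: W_t = W_{t-1} + mu_q[t] + N(0, sigma2); P: same with mu_p[t].
mu_q = rng.normal(size=T)
mu_p = rng.normal(size=T)

# Per-step conditional KL between Gaussians with equal variance:
# KL(N(a, s^2) || N(b, s^2)) = (a - b)^2 / (2 s^2).
step_kls = (mu_q - mu_p) ** 2 / (2 * sigma2)

# Chain rule: KL(Q || P) over full trajectories is the sum of the
# per-step terms (increments are independent, so each term is constant).
print("KL(Q||P) =", step_kls.sum())

# Lemma 4.1 gives KL(Q_T || P_T) <= KL(Q || P); here the terminal
# marginals are Q_T = N(sum(mu_q), T*sigma2), P_T = N(sum(mu_p), T*sigma2).
kl_T = (mu_q.sum() - mu_p.sum()) ** 2 / (2 * T * sigma2)
print("KL(Q_T||P_T) =", kl_T, "<=", step_kls.sum())
```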
4.2 Example: Langevin Dynamics Algorithm for Non-Convex Learning

We apply these results to obtain generalization bounds for a gradient-based iterative noisy algorithm, the Langevin dynamics (LD) algorithm. For classification with continuous parameters, the 0-1 loss does not provide useful gradients. Typically we optimize a surrogate objective, based on a surrogate loss, such as cross entropy. Write $\tilde{\ell}: \mathcal{Z} \times \mathcal{W} \to \mathbb{R}$ for the surrogate loss and let $\tilde{R}_S(w) = \frac{1}{n}\sum_{i=1}^{n} \tilde{\ell}(Z_i, w)$ be the empirical surrogate risk. Let $\eta_t$ be the learning rate at time $t$, $\beta_t$ the inverse temperature at time $t$, and let $\varepsilon_t$ be sampled i.i.d. from $\mathcal{N}(0, I_d)$. Then the LD iterates are given by
$$W_{t+1} = W_t - \eta_t \nabla\tilde{R}_S(W_t) + \sqrt{2\eta_t/\beta_t}\,\varepsilon_t. \quad (11)$$

The prior. We take $m = 1$ and construct a bespoke $\sigma(\tilde{Z}^{(2)}, U_{J^c}, J)$-measurable prior for this problem in order to apply Theorem 3.7. The prior is based on a decision function $\theta: \mathbb{R} \to [0,1]$ which, at each time $t+1$, takes in a $\sigma(W_0, \ldots, W_t)$-measurable test statistic $\Delta Y_t$ and returns a degree of belief in favor of the hypothesis $U_J = 1$ over $U_J = 2$. The prior predicts an LD step by replacing the unknown (to the prior) contribution to the gradient of the data point at index $J$ with a $\hat{\theta}_t = \theta(\Delta Y_t)$-weighted average of the gradients due to each candidate $\{Z_{1,J}, Z_{2,J}\}$. The conditional law of the $t$th iterate under the prior is a $\sigma(\tilde{Z}^{(2)}, U_{J^c}, J, W_0, \ldots, W_t)$-measurable random measure, as required. The exact value of the test statistic is $\Delta Y_t = Y_{t,2} - Y_{t,1}$, where $Y_{0,1} = Y_{0,2} = 0$ and $Y_{t,u}$ is defined by the formula in Eq. (13). The conditional law of the $t$th iterate under the prior is described by
$$W_{t+1} = W_t - \frac{\eta_t}{n}\Big(\sum_{i=1,\, i \ne J}^{n} \nabla\tilde{\ell}(Z_i, W_t) + \hat{\theta}_t \nabla\tilde{\ell}(Z_{1,J}, W_t) + (1 - \hat{\theta}_t)\nabla\tilde{\ell}(Z_{2,J}, W_t)\Big) + \sqrt{2\eta_t/\beta_t}\,\varepsilon_t. \quad (12)$$

The test statistic chosen is based on the log-likelihood-ratio test statistic for the independent mean-zero Gaussian random vectors $(\varepsilon_s)_{s=1}^{t}$, which is well known to be uniformly most powerful for the binary discrimination of means. Natural choices for $\theta$ are symmetric CDFs, since they treat the possible values of $U$ symmetrically and are monotone in the test statistic. We define the two-sample incoherence at time $t$ by $\zeta_t = \nabla\tilde{\ell}(Z_{1,J}, W_t) - \nabla\tilde{\ell}(Z_{2,J}, W_t)$. $\Theta$ denotes the set of measurable $\theta: \mathbb{R} \to [0,1]$. $Y_{0,1} = Y_{0,2} = 0$, and for $t \ge 1$, $Y_{t,1}$ and $Y_{t,2}$ are given by (for $u \in \{1,2\}$)
$$Y_{t,u} \triangleq \sum_{i=1}^{t} \frac{\beta_{i-1}}{4\eta_{i-1}}\Big\| W_i - W_{i-1} + \eta_{i-1}\frac{n-1}{n}\nabla\tilde{R}_{S_{J^c}}(W_{i-1}) + \frac{\eta_{i-1}}{n}\nabla\tilde{\ell}(Z_{u,J}, W_{i-1}) \Big\|^2. \quad (13)$$

Theorem 4.2 (Generalization bound for LD algorithm). Let $\{W_t\}_{t \in [T]}$ denote the iterates of the LD algorithm. If $\ell(Z, w)$ is $[0,1]$-bounded, then
$$\mathbb{E}\big[R_{\mathcal{D}}(W_T) - \hat{R}_S(W_T)\big] \le \frac{\sqrt{2}}{n}\,\inf_{\theta \in \Theta}\, \mathbb{E}\sqrt{\sum_{t=0}^{T-1} \mathbb{E}^{\tilde{Z}^{(2)},U,J}\,\beta_t \eta_t \|\zeta_t\|^2 \big(\mathbb{1}\{U_J = 1\} - \theta(Y_{t,2} - Y_{t,1})\big)^2}. \quad (14)$$

Remark 4.3. For $\theta \in \Theta$ with $1 - \theta(x) = \theta(-x)$, Eq. (14) simplifies to
$$\mathbb{E}\big[R_{\mathcal{D}}(W_T) - \hat{R}_S(W_T)\big] \le \frac{\sqrt{2}}{n}\,\mathbb{E}\sqrt{\sum_{t=0}^{T-1} \mathbb{E}^{\tilde{Z}^{(2)},U,J}\,\beta_t \eta_t \|\zeta_t\|^2\, \theta^2\big((-1)^{U_J}(Y_{t,2} - Y_{t,1})\big)}. \quad (15)$$
For instance, $\theta(x) = \frac{1}{2} + \frac{1}{2}\tanh(x)$ and $\theta(x) = \frac{1}{2} + \frac{1}{2}\mathrm{sign}(x)$ satisfy $1 - \theta(x) = \theta(-x)$.

Remark 4.4. By the law of total expectation, for any $\theta \in \Theta$, $\mathrm{EGE}_{\mathcal{D}}(A) \le \frac{\sqrt{2}}{2n}\,\mathbb{E}[V_1 + V_2]$, where
$$V_u \triangleq \sqrt{\sum_{t=0}^{T-1} \mathbb{E}^{\tilde{Z}^{(2)}, U_{J^c}, J,\, U_J = u}\,\beta_t \eta_t \|\zeta_t\|^2 \big(\mathbb{1}\{u = 1\} - \theta(Y_{t,2} - Y_{t,1})\big)^2}, \quad u \in \{1,2\}. \quad (16)$$
To estimate $V_u$ ($u \in \{1,2\}$) for fixed $J$, the training set is $S_u = \{Z_1, \ldots, Z_{J-1}, \tilde{Z}_{u,J}, Z_{J+1}, \ldots, Z_n\}$. Hence $V_1, V_2$ can be simulated from just $n+1$ data points $(Z_1, \ldots, Z_{J-1}, Z_{J+1}, \ldots, Z_n, \tilde{Z}_{1,J}, \tilde{Z}_{2,J}) \sim \mathcal{D}^{\otimes(n+1)}$.

The generalization bound in Eq. (14) does not place any restrictions on the learning rate or Lipschitz continuity of the loss or its gradient. In the next corollary we study the asymptotic properties of the bound in Eq. (14) when $\tilde{\ell}$ is $L$-Lipschitz. Then, we draw a comparison between the bound in this paper and some of the existing bounds in the literature.

Corollary 4.5. Under the assumption that $\tilde{\ell}$ is $L$-Lipschitz, we have $\|\zeta_t\| \le 2L$, and the generalization bound in Eq. (14) can be upper-bounded as
$$\mathbb{E}\big[R_{\mathcal{D}}(W_T) - \hat{R}_S(W_T)\big] \le \frac{\sqrt{2}L}{n}\,\inf_{\theta \in \Theta}\,\mathbb{E}\sqrt{\sum_{t=0}^{T-1} \mathbb{E}^{\tilde{Z}^{(2)},U,J}\,\beta_t \eta_t \big(\mathbb{1}\{U_J = 1\} - \theta(Y_{t,2} - Y_{t,1})\big)^2}. \quad (17)$$

Remark 4.6. Under an $L$-Lipschitz assumption, for the LD algorithm, Li et al. [14, Thm. 9] have
$$\mathbb{E}\big[R_{\mathcal{D}}(W_T) - \hat{R}_S(W_T)\big] \le \frac{\sqrt{2}L}{n}\sqrt{\sum_{t=0}^{T-1} \beta_t \eta_t}. \quad (18)$$
We immediately see that Eq. (17) provides a constant-factor improvement over Eq. (18) by naïvely using $\theta: x \mapsto 1/2$. Our bound is an order-wise improvement with respect to $n$ over those of Bu et al. [6] and Pensia et al. [16] under the $L$-Lipschitz assumption. Negrea et al. [15, App. E.1] obtain
$$\mathbb{E}\big[R_{\mathcal{D}}(W_T) - \hat{R}_S(W_T)\big] \le \frac{L}{2(n-1)}\sqrt{\sum_{t=0}^{T-1} \beta_t \eta_t}, \quad (19)$$
which is a constant factor better than our bound for the choice $\theta: x \mapsto 1/2$. However, this $\theta$ essentially corresponds to no hypothesis test, yielding the same prior as in [15]. For more sophisticated choices of decision function $\theta$, even under a Lipschitz-surrogate-loss assumption, it is difficult to compare our bound with related work because the exact impact of $\theta$-discounting is difficult to quantify analytically.
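The update rules in Eqs. (11)–(13) can be simulated directly. The following NumPy sketch (ours; the quadratic toy surrogate loss $\tilde{\ell}(z, w) = \|w - z\|^2/2$ and all constants are illustrative assumptions, not the paper's experimental setup) runs the LD posterior, maintains the statistics $Y_{t,u}$, and accumulates the per-step summand of Eq. (15) for a single trajectory with $U_J = 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, T, eta, beta = 20, 2, 50, 0.05, 100.0

# Toy surrogate loss l(z, w) = ||w - z||^2 / 2, so its gradient is w - z.
Z_rest = rng.normal(size=(n - 1, d))               # the n-1 points S_{J^c}
Z1J, Z2J = rng.normal(size=d), rng.normal(size=d)  # super-sample column J
# Ground truth: U_J = 1, i.e. Z1J is the point actually in S.

theta = lambda x: 0.5 * (1.0 + np.tanh(x))   # symmetric decision function
w, Y = np.zeros(d), np.zeros(2)              # Y holds Y_{t,1}, Y_{t,2}
bound_sum = 0.0

for t in range(T):
    g_rest = (w - Z_rest).sum(axis=0)        # sum of grads over S_{J^c}
    g1, g2 = w - Z1J, w - Z2J                # candidate grads at index J
    th = theta(Y[1] - Y[0])                  # belief that U_J = 1 (Eq. 12)

    # Per-step summand of Eq. (15) with U_J = 1: beta*eta*||zeta||^2*(1-th)^2.
    zeta = g1 - g2
    bound_sum += beta * eta * float(zeta @ zeta) * (1.0 - th) ** 2

    # Posterior LD step, Eq. (11), using the true point Z1J.
    eps = rng.normal(size=d)
    w_next = w - (eta / n) * (g_rest + g1) + np.sqrt(2 * eta / beta) * eps

    # Update Y_{t+1,u}, Eq. (13): residual of the observed step under each
    # candidate identity of the J-th point (small residual = plausible).
    for u, gu in enumerate((g1, g2)):
        resid = w_next - w + (eta / n) * (g_rest + gu)
        Y[u] += (beta / (4.0 * eta)) * float(resid @ resid)
    w = w_next

# One-trajectory Monte Carlo contribution to the bound of Eq. (15).
print("belief in U_J = 1:", theta(Y[1] - Y[0]))
print("sqrt(2)/n * sqrt(sum):", np.sqrt(2.0) / n * np.sqrt(bound_sum))
```

Note how the wrong candidate accumulates residual mass in $Y$, so the belief drifts toward the true index and the $(1 - \hat{\theta}_t)^2$ factor discounts later penalty terms.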
Remark 4.7. A prevailing method for analyzing the generalization error of iterative algorithms in [6, 14, 15, 16] is via the chain rule for KL, using priors for the joint distribution of weight vectors that are Markov, i.e., given the $t$th weight, the $(t+1)$th weight is conditionally independent from the trajectory so far. Existing results using this approach accumulate a "penalty" for each step. In [6, 14, 15], the penalty terms are, respectively, the squared Lipschitz constant, the squared norm of the gradients, and the trace of the minibatch gradient covariance. The penalty term in our paper is the squared norm of the "two-sample incoherence" $\zeta_t$, defined in Theorem 4.2 as the difference between the gradients of a randomly selected training point and of the held-out point. However, the use of the chain rule along with existing "Markovian" priors introduces a source of looseness: the accumulating penalty may diverge to $+\infty$, yielding vacuous bounds (as seen in Fig. 1). The distinguishing feature of our data-dependent CMI analysis is that the penalty terms get "filtered" by the online hypothesis test via our non-Markovian prior, i.e., our prediction for $t+1$ depends on the whole trajectory. When the true index can be inferred from the previous weights, the penalty essentially stops accumulating.

4.2.1 Empirical Results

In order to better understand the effect of discounting and the degree of improvement due to our new bounds and more sophisticated prior, we turn to simulation studies. We present and compare empirical evaluations of the generalization bound in Theorem 4.2 with the data-dependent generalization bounds in Li et al. [14] and Negrea et al. [15]. For brevity, many of the details behind our implementation are deferred to Appendix G. The functional forms of our bounds and those of [14, 15] are nearly identical, as all of them use the chain rule for KL divergence. Nevertheless, the summands appearing in the bounds are different. The bound in [14] depends on the squared norm of the surrogate loss gradients, and the generalization bound in Negrea et al. [15] depends on the squared norm of the training set incoherence, defined as $\|\nabla\tilde{\ell}(Z_J, W_t) - \frac{1}{n-1}\sum_{i \in [n],\, i \ne J}\nabla\tilde{\ell}(Z_i, W_t)\|^2$, where the training set is $\{Z_1, \ldots, Z_n\}$ and $J \sim \mathrm{Unif}([n])$. The first key difference between our bound and others is that the summand in our bound consists of two terms: the squared norm of the two-sample incoherence, i.e., $\|\zeta_t\|^2$, and the squared error probability of a hypothesis test at time $t$, given by the term $(\mathbb{1}\{U_J = 1\} - \theta(Y_{t,2} - Y_{t,1}))^2$ in our bound. A consequence of this, and the second fundamental difference between our bound and existing bounds, is that our bound exhibits a trade-off in $\|\zeta_t\|^2$, because a large $\|\zeta_t\|^2$ will make the error of the hypothesis test small on future iterations, whereas the bounds in [14, 15] are uniformly increasing with respect to the squared norm of the surrogate loss gradients and the training set incoherence, respectively. In this section we empirically evaluate and compare our bound with related work across various neural network architectures and datasets. Using Monte Carlo (MC) simulation, we compare estimates of our expected generalization error bounds with estimates of the bounds from [14, 15] for the MNIST [13], CIFAR10 [12], and FashionMNIST [23] datasets in Fig. 1 and Table 1. For all the plots we take $\theta(x) = \frac{1}{2}(1 + \mathrm{erf}(x))$ for our bound. Also, in the last row of Table 1, we report the unbiased estimate of our bound optimized over the choice of $\theta$ function, selected as sketched below.
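As a schematic of that last step (ours; the logged arrays below are random placeholders, and a careful estimate would select $\theta$ on held-out simulations so the reported value stays unbiased), one can pick the decision function from a small parametric family by minimizing the Monte Carlo estimate of the bound:

```python
import numpy as np
from scipy.special import erf

# Assume per-step quantities were logged from simulated LD runs (as in the
# sketch above): zeta_sq[r, t] = ||zeta_t||^2 and dY[r, t] = Y_{t,2}-Y_{t,1}
# for R independent runs that all used U_J = 1, with constant beta*eta.
rng = np.random.default_rng(3)
R, T, beta_eta = 8, 50, 5.0
zeta_sq = rng.uniform(0.5, 2.0, size=(R, T))           # placeholder logs
dY = np.cumsum(rng.normal(0.3, 1.0, size=(R, T)), 1)   # drifts up if U_J=1

def mc_bound(scale, n=20):
    belief = 0.5 * (1.0 + erf(scale * dY))   # belief in U_J = 1
    summands = beta_eta * zeta_sq * (1.0 - belief) ** 2
    return np.sqrt(2.0) / n * np.sqrt(summands.sum(axis=1)).mean()

# CMI-OPT-style selection: best decision function in a small family.
scales = [0.1, 0.3, 1.0, 3.0]
best = min(scales, key=mc_bound)
print({s: round(mc_bound(s), 4) for s in scales}, "best scale:", best)
```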
We plot the squared norm of the two-sample incoherence and of the training set incoherence, as well as the squared error probability of the hypothesis test. Fig. 1 and Table 1 show that our bound is tighter and remains non-vacuous after many more iterations. We also observe that the variances of the MC estimates of our bound are smaller than those of Negrea et al. [15], and also smaller than those of Li et al. [14] for the CIFAR10 and MNIST-CNN experiments. Moreover, we observe that the error probability of the hypothesis test decays with the number of iterations, which matches the intuition that, as one observes more noisy increments of the process, one is more able to determine which point is contributing to the gradient. For CIFAR10, $\|\zeta_t\|^2$ is large because the generalization gap is large. However, as mentioned at the beginning of this section, a large $\|\zeta_t\|^2$ makes the hypothesis testing easier on subsequent iterations. For instance, after iteration 600 the error is vanishingly small for the CIFAR10 experiments, which results in a plateau region in the bound. We observe the same phenomenon for the Fashion-MNIST experiment. This property distinguishes our bound from those in [14, 15]. Results for MNIST with a CNN demonstrate that $\|\zeta_t\|^2$ and the training set incoherence are close to each other. The reason behind this observation is that the generalization gap is small. Also, for this experiment the performance of the hypothesis test is only slightly better than random guessing, since the generalization gap is small and it is difficult to distinguish the training samples from the test samples. This observation explains why our generalization bound is close to that of [15]. Nevertheless, the hypothesis-testing performance improves with more training iterations, leading the two bounds to diverge, with our new bound performing better at later iterations. Finally, the scaling of our bound with respect to the number of iterations is tighter than in the bounds of [14, 15], as can be seen in Fig. 1.

[Figure 1 (plots omitted): four rows of panels for MNIST with MLP, MNIST with CNN, F-MNIST with CNN, and CIFAR with CNN; each row compares the expected generalization error bounds of Negrea et al. (2019), Li, Luo, and Qiao (2020), and the CMI bound (ours), the squared norm of the incoherence, and the squared error probability of the hypothesis test, all against training iteration.]

Figure 1: Numerical results for various datasets and architectures. All the x-axes represent the training iteration. The plots in the first column depict Monte Carlo estimates of our bounds together with those of Li et al.
[14] and Negrea et al. [15]. The plots in the second column compare the mean of the training set incoherence in [15] with the two-sample incoherence in our bound. Finally, the plots in the third column show the mean of the squared error probability of the hypothesis test performed by the proposed prior in our bound.

                      MNIST-MLP      MNIST-CNN      CIFAR10-CNN       FMNIST-CNN
Training error        4.33±0.01%     2.59±0.01%     9.39±0.36%        7.96±0.03%
Generalization error  0.88±0.01%     0.55±0.01%     32.89±0.44%       3.71±0.03%
Negrea et al. [15]    67.93±16.25%   20.98±5.01%    4112.63±567.08%   82.89±12.64%
Li et al. [14]        600.29±1.99%   245.03±2.37%   20754.32±75.95%   598.62±3.21%
CMI (Ours)            44.65±4.27%    16.51±1.41%    71.76±4.82%       48.01±4.22%
CMI-OPT (Ours)        39.06±5.52%    13.24±1.53%    63.00±5.97%       41.17±5.85%

Table 1: Summary of the results. The generalization bounds are reported at the end of training.

Acknowledgments The authors would like to thank Blair Bilodeau and Yasaman Mahdaviyeh for feedback on drafts of this work, and Shiva Ketabi for helpful discussions on the implementation of the bounds.

Funding MH is supported by the Vector Institute. JN is supported by an NSERC Vanier Canada Graduate Scholarship, and by the Vector Institute. DMR is supported by an NSERC Discovery Grant and an Ontario Early Researcher Award. This research was carried out in part while MH, JN, GKD, and DMR were visiting the Institute for Advanced Study. JN's visit to the Institute was funded by an NSERC Michael Smith Foreign Study Supplement. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners.

Broader Impact This work builds upon the community's understanding of generalization error for machine learning methods. This has a positive impact on the scientific advancement of the field, and may lead to further improvements in our understanding, methodologies and applications of machine learning and AI. While there are no obvious direct societal implications of the present work, the indirect and longer-term impact on society may be positive, negative, or both, depending on how, where, and for what purpose the machine learning methods that benefit from our research are used in the future.
1. What is the focus and contribution of the paper regarding generalization error bounds?
2. What are the strengths of the proposed approach, particularly in exploiting previous works?
3. What are the weaknesses of the paper, especially in terms of ignoring subgaussian parameters and comparing CMI and IOMI?
4. How can the proposed generalization bounds be used in practice to guide algorithm design?
5. What are the limitations of computing or estimating the bound in high-dimensional cases, such as in neural networks with many parameters?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper is along the recent, very popular line of work on information-theoretic generalization error bounds. It first characterizes the relationship between two types of mutual-information-based generalization bounds, and then proposes its own bound based on the idea of individual samples in [6] and the data-dependent idea in [15]. Applications to noisy, iterative algorithms are investigated. Numerical results are provided to validate the claims. I have read the authors' response. My concerns are properly addressed, and therefore I raised my score to 7. Strengths This paper exploits ideas from [6] and [15] and develops a tighter generalization error bound. Weaknesses 1. Theorem 1.1 ignores the subgaussian parameter in [18] and [24]. Same for Theorem 1.2. 2. In Section 2, the CMI and IOMI are compared. Is it meaningful to compare only these two quantities, since they may appear in the generalization bound with different coefficients in front of them? 3. One generic question is: how do we exactly use these generalization bounds in practice to guide algorithm design? 4. When applied to neural networks with a large number of parameters, how do we compute the bound (or how do we estimate it)? In this high-dimensional case, most mutual information estimators will fail. If a bound is not computable, how can we use it in practice?
NIPS
Title Why Is My Classifier Discriminatory?

Abstract Recent attempts to achieve fairness in predictive models focus on the balance between fairness and accuracy. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable, as any increase in prediction error could have devastating consequences. In this work, we argue that the fairness of predictions should be evaluated in the context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model. We decompose cost-based metrics of discrimination into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Finally, we perform case studies on prediction of income, mortality, and review ratings, confirming the value of this analysis. We find that data collection is often a means to reduce discrimination without sacrificing accuracy.

1 Introduction

As machine learning algorithms increasingly affect decision making in society, many have raised concerns about the fairness and biases of these algorithms, especially in applications to healthcare or criminal justice, where human lives are at stake (Angwin et al., 2016; Barocas & Selbst, 2016). It is often hoped that the use of automatic decision support systems trained on observational data will remove human bias and improve accuracy. However, factors such as data quality and model choice may encode unintentional discrimination, resulting in systematic disparate impact.

We study fairness in prediction of outcomes such as recidivism, annual income, or patient mortality. Fairness is evaluated with respect to protected groups of individuals defined by attributes such as gender or ethnicity (Ruggieri et al., 2010). Following previous work, we measure discrimination in terms of differences in prediction cost across protected groups (Calders & Verwer, 2010; Dwork et al., 2012; Feldman et al., 2015). Correcting for issues of data provenance and historical bias in labels is outside the scope of this work. Much research has been devoted to constraining models to satisfy cost-based fairness in prediction, as we expand on below. The impact of data collection on discrimination has received comparatively little attention.

Fairness in prediction has been encouraged by adjusting models through regularization (Bechavod & Ligett, 2017; Kamishima et al., 2011), constraints (Kamiran et al., 2010; Zafar et al., 2017), and representation learning (Zemel et al., 2013). These attempts can be broadly categorized as model-based approaches to fairness. Others have applied data preprocessing to reduce discrimination (Hajian & Domingo-Ferrer, 2013; Feldman et al., 2015; Calmon et al., 2017). For an empirical comparison, see for example Friedler et al. (2018). Inevitably, however, restricting the model class or perturbing training data to improve fairness may harm predictive accuracy (Corbett-Davies et al., 2017). A trade-off of predictive accuracy for fairness is sometimes difficult to motivate when predictions influence high-stakes decisions. In particular, post-hoc correction methods based on randomizing predictions (Hardt et al., 2016; Pleiss et al., 2017) are unjustifiable for ethical reasons in clinical tasks such as severity scoring. Moreover, as pointed out by Woodworth et al.
(2017), post-hoc correction may lead to suboptimal predictive accuracy compared to other equally fair classifiers.

Disparate predictive accuracy can often be explained by insufficient or skewed sample sizes or by inherent unpredictability of the outcome given the available set of variables. With this in mind, we propose that the fairness of predictive models should be analyzed in terms of model bias, model variance, and outcome noise before they are constrained to satisfy fairness criteria. This exposes and separates the adverse impact of inadequate data collection and of the choice of model on fairness. The cost of fairness need not always be one of predictive accuracy, but one of investment in data collection and model development. In high-stakes applications, the benefits often outweigh the costs.

In this work, we use the term "discrimination" to refer to specific kinds of differences in the predictive power of models when applied to different protected groups. In some domains, such differences may not be considered discriminatory, and it is critical that decisions made based on this information are sensitive to this fact. For example, in prior work, researchers showed that causal inference may help uncover which sources of differences in predictive accuracy introduce unfairness (Kusner et al., 2017). In this work, we assume that observed differences are considered discriminatory and discuss various means of explaining and reducing them.

Main contributions. We give a procedure for analyzing discrimination in predictive models with respect to cost-based definitions of group fairness, emphasizing the impact of data collection. First, we propose the use of bias-variance-noise decompositions for separating sources of discrimination. Second, we suggest procedures for estimating the value of collecting additional training samples. Finally, we propose the use of clustering for identifying subpopulations that are discriminated against, to guide additional variable collection. We use these tools to analyze the fairness of common learning algorithms in three tasks: predicting income based on census data, predicting mortality of patients in critical care, and predicting book review ratings from text. We find that the accuracy in predictions of the mortality of cancer patients varies by as much as 20% between protected groups. In addition, our experiments confirm that the discrimination level is sensitive to the quality of the training data.

2 Background

We study fairness in prediction of an outcome $Y \in \mathcal{Y}$. Predictions are based on a set of covariates $X \in \mathcal{X} \subseteq \mathbb{R}^k$ and a protected attribute $A \in \mathcal{A}$. In mortality prediction, $X$ represents the medical history of a patient in critical care, $A$ the self-reported ethnicity, and $Y$ mortality. A model is considered fair if its errors are distributed similarly across protected groups, as measured by a cost function $\gamma$. Predictions learned from a training set $d$ are denoted $\hat{Y}_d := h(X, A)$ for some $h: \mathcal{X} \times \mathcal{A} \to \mathcal{Y}$ from a class $\mathcal{H}$. The protected attribute is assumed to be binary, $\mathcal{A} = \{0, 1\}$, but our results generalize to the non-binary case. A dataset $d = \{(x_i, a_i, y_i)\}_{i=1}^{n}$ consists of $n$ samples distributed according to $p(X, A, Y)$. When clear from context, we drop the subscript from $\hat{Y}_d$. A popular cost-based definition of fairness is the equalized odds criterion, which states that a binary classifier $\hat{Y}$ is fair if its false negative rates (FNR) and false positive rates (FPR) are equal across groups (Hardt et al., 2016).
We define FPR and FNR with respect to protected group $a \in \mathcal{A}$ by
$$\mathrm{FPR}_a(\hat{Y}) := \mathbb{E}_X[\hat{Y} \mid Y = 0, A = a], \qquad \mathrm{FNR}_a(\hat{Y}) := \mathbb{E}_X[1 - \hat{Y} \mid Y = 1, A = a].$$
Exact equality, $\mathrm{FPR}_0(\hat{Y}) = \mathrm{FPR}_1(\hat{Y})$, is often hard to verify or enforce in practice. Instead, we study the degree to which such constraints are violated. More generally, we use differences in cost functions $\gamma_a$ between protected groups $a \in \mathcal{A}$ to define the level of discrimination $\Gamma$,
$$\Gamma_\gamma(\hat{Y}) := \big|\gamma_0(\hat{Y}) - \gamma_1(\hat{Y})\big|. \quad (1)$$
In this work we study cost functions $\gamma_a \in \{\mathrm{FPR}_a, \mathrm{FNR}_a, \mathrm{ZO}_a\}$ in binary classification tasks, with $\mathrm{ZO}_a(\hat{Y}) := \mathbb{E}_X[\mathbb{1}[\hat{Y} \ne Y] \mid A = a]$ the zero-one loss. In regression problems, we use the group-specific mean squared error $\mathrm{MSE}_a := \mathbb{E}_X[(\hat{Y} - Y)^2 \mid A = a]$. According to (1), predictions $\hat{Y}$ satisfy equalized odds on $d$ if $\Gamma_{\mathrm{FPR}}(\hat{Y}) = 0$ and $\Gamma_{\mathrm{FNR}}(\hat{Y}) = 0$.

Calibration and impossibility. A score-based classifier is calibrated if the prediction score assigned to a unit equals the fraction of positive outcomes for all units assigned similar scores. It is impossible for a classifier to be calibrated in every protected group and satisfy multiple cost-based fairness criteria at once, unless accuracy is perfect or base rates of outcomes are equal across groups (Chouldechova, 2017). A relaxed version of this result (Kleinberg et al., 2016) applies to the discrimination level $\Gamma$. Inevitably, both constraint-based methods and our approach are faced with a choice between which fairness criteria to satisfy, and at what cost.

[Figure 1 (plots omitted): (a) For identically distributed protected groups and an unaware outcome (see below), bias and noise are equal in expectation; perceived discrimination is only due to variance. (b) Heteroskedastic noise, i.e., $\exists x, x': N(x) \ne N(x')$, may contribute to discrimination even for an optimal model if protected groups are not identically distributed. (c) One choice of model may be more suited to one protected group, even under negligible noise and variance, resulting in a difference in expected bias, $B_0 \ne B_1$. Caption: Scenarios illustrating how properties of the training set and model choice affect perceived discrimination in a binary classification task, under the assumption that outcomes and predictions are unaware, i.e., $p(Y \mid X, A) = p(Y \mid X)$ and $p(\hat{Y} \mid X, A) = p(\hat{Y} \mid X)$. Through bias-variance-noise decompositions (see Section 3.1), we can identify which of these dominate in their effect on fairness. We propose procedures for addressing each component in Section 4, and use them in experiments (see Section 5) to mitigate discrimination in income prediction and prediction of ICU mortality.]
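To fix ideas, the following short sketch (ours, not from the paper; the synthetic data is a placeholder) computes the empirical group-conditional error rates and the discrimination level of Eq. (1) from arrays of labels, predictions, and group membership:

```python
import numpy as np

def group_rates(y, y_hat, a, group):
    """Empirical FPR_a, FNR_a, ZO_a for one protected group."""
    m = (a == group)
    fpr = y_hat[m & (y == 0)].mean()          # P(Y_hat = 1 | Y = 0, A = a)
    fnr = (1 - y_hat[m & (y == 1)]).mean()    # P(Y_hat = 0 | Y = 1, A = a)
    zo = (y_hat[m] != y[m]).mean()            # zero-one loss in group a
    return fpr, fnr, zo

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                  # outcomes
a = rng.integers(0, 2, 1000)                  # protected attribute
y_hat = np.where(rng.random(1000) < 0.8, y, 1 - y)  # noisy predictions

(fpr0, fnr0, _), (fpr1, fnr1, _) = group_rates(y, y_hat, a, 0), group_rates(y, y_hat, a, 1)
print("Gamma_FPR =", abs(fpr0 - fpr1), "Gamma_FNR =", abs(fnr0 - fnr1))
```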
3 Sources of perceived discrimination

There are many potential sources of discrimination in predictive models. In particular, the choice of hypothesis class $\mathcal{H}$ and learning objective has received a lot of attention (Calders & Verwer, 2010; Zemel et al., 2013; Fish et al., 2016). However, data collection—the chosen set of predictive variables $X$, the sampling distribution $p(X, A, Y)$, and the training set size $n$—is an equally integral part of deploying fair machine learning systems in practice, and it should be guided to promote fairness. Below, we tease apart sources of discrimination through bias-variance-noise decompositions of cost-based fairness criteria. In general, we may think of noise in the outcome as the effect of a set of unobserved variables $U$, potentially interacting with $X$. Even the optimal achievable error for predictions based on $X$ may be reduced further by observing parts of $U$. In Figure 1, we illustrate three common learning scenarios and study their fairness properties through bias, variance, and noise.

To account for randomness in the sampling of training sets, we redefine the discrimination level (1) in terms of the expected cost $\overline{\gamma}_a(\hat{Y}) := \mathbb{E}_D[\gamma_a(\hat{Y}_D)]$ over draws of a random training set $D$.

Definition 1. The expected discrimination level $\Gamma(\hat{Y})$ of a predictive model $\hat{Y}$ learned from a random training set $D$ is
$$\Gamma(\hat{Y}) := \big|\mathbb{E}_D[\gamma_0(\hat{Y}_D) - \gamma_1(\hat{Y}_D)]\big| = \big|\overline{\gamma}_0(\hat{Y}) - \overline{\gamma}_1(\hat{Y})\big|.$$

$\Gamma(\hat{Y})$ is not observed in practice when only a single training set $d$ is available. If $n$ is small, it is recommended to estimate $\Gamma$ through re-sampling methods such as bootstrapping (Efron, 1992).

3.1 Bias-variance-noise decompositions of discrimination level

An algorithm that learns models $\hat{Y}_D$ from datasets $D$ is given, and the covariates $X$ and the size $n$ of the training data are fixed. We assume that $\hat{Y}_D$ is a deterministic function $\hat{y}_D(x, a)$ given the training set $D$, e.g., a thresholded scoring function. Following Domingos (2000), we base our analysis on decompositions of loss functions $L$ evaluated at points $(x, a)$. For decompositions of costs $\gamma_a \in \{\mathrm{ZO}, \mathrm{FPR}, \mathrm{FNR}\}$ we let this be the zero-one loss, $L(y, y') = \mathbb{1}[y \ne y']$, and for $\gamma_a = \mathrm{MSE}$, the squared loss, $L(y, y') = (y - y')^2$. We define the main prediction $\tilde{y}(x, a) = \arg\min_{y'} \mathbb{E}_D[L(\hat{Y}_D, y') \mid X = x, A = a]$ as the average prediction over draws of training sets for the squared loss, and the majority vote for the zero-one loss. The (Bayes) optimal prediction $y^*(x, a) = \arg\min_{y'} \mathbb{E}_Y[L(Y, y') \mid X = x, A = a]$ achieves the smallest expected error with respect to the random outcome $Y$.

Definition 2 (Bias, variance and noise). Following Domingos (2000), we define the bias $B$, variance $V$, and noise $N$ at a point $(x, a)$ by
$$B(\hat{Y}, x, a) = L(y^*(x, a), \tilde{y}(x, a)), \quad N(x, a) = \mathbb{E}_Y[L(y^*(x, a), Y) \mid X = x, A = a], \quad V(\hat{Y}, x, a) = \mathbb{E}_D[L(\tilde{y}(x, a), \hat{y}_D(x, a))]. \quad (2)$$

Here, $y^*$, $\hat{y}$, and $\tilde{y}$ are all deterministic functions of $(x, a)$, while $Y$ is a random variable. In words, the bias $B$ is the loss incurred by the main prediction relative to the optimal prediction. The variance $V$ is the average loss incurred by the predictions learned from different datasets relative to the main prediction. The noise $N$ is the remaining loss independent of the learning algorithm, often known as the Bayes error. We use these definitions to decompose $\Gamma$ under various definitions of $\gamma_a$.

Theorem 1. With $\gamma_a$ the group-specific zero-one loss or class-conditional versions thereof (e.g., FNR, FPR), or the mean squared error, $\gamma_a$ and the discrimination level $\Gamma$ admit decompositions of the form
$$\overline{\gamma}_a(\hat{Y}) = \underbrace{\overline{N}_a}_{\text{Noise}} + \underbrace{\overline{B}_a(\hat{Y})}_{\text{Bias}} + \underbrace{\overline{V}_a(\hat{Y})}_{\text{Variance}} \quad \text{and} \quad \Gamma = \big|(\overline{N}_0 - \overline{N}_1) + (\overline{B}_0 - \overline{B}_1) + (\overline{V}_0 - \overline{V}_1)\big|,$$
where we leave out $\hat{Y}$ in the decomposition of $\Gamma$ for brevity. With $B, V$ defined as in (2), we have
$$\overline{B}_a(\hat{Y}) = \mathbb{E}_X[B(\tilde{y}, X, a) \mid A = a] \quad \text{and} \quad \overline{V}_a(\hat{Y}) = \mathbb{E}_{X,D}[c_v(X)\,V(\hat{Y}_D, X, a) \mid A = a].$$
For the zero-one loss, $c_v(x, a) = 1$ if $\tilde{y}(x, a) = y^*(x, a)$ (i.e., the main prediction is optimal), and otherwise $c_v(x, a) = -1$. For the squared loss, $c_v(x, a) = 1$. The noise term for population losses is
$$\overline{N}_a := \mathbb{E}_X[c_n(X, a)\,L(y^*(X, a), Y) \mid A = a],$$
and for class-conditional losses w.r.t. class $y \in \{0, 1\}$,
$$\overline{N}_a(y) := \mathbb{E}_X[c_n(X, a)\,L(y^*(X, a), y) \mid A = a, Y = y].$$
For the zero-one loss and class-conditional variants, $c_n(x, a) = 2\,\mathbb{E}_D[\mathbb{1}[\hat{y}_D(x, a) = y^*(x, a)]] - 1$, and for the squared loss, $c_n(x, a) = 1$.
Proof sketch. Conditioning and exchanging the order of expectation, the cases of the mean squared error and zero-one losses follow from Domingos (2000). Class-conditional losses follow from a case-by-case analysis of possible errors. See the supplementary material for a full proof.

Theorem 1 points to distinct sources of perceived discrimination. Significant differences in bias, $\overline{B}_0 - \overline{B}_1$, indicate that the chosen model class is not flexible enough to fit both protected groups well (see Figure 1c). This is typical of (misspecified) linear models, which approximate non-linear functions well only in small regions of the input space. Regularization or post-hoc correction of models effectively increases the bias of one of the groups, and should be considered only if there is reason to believe that the original bias is already minimal. Differences in variance, $\overline{V}_0 - \overline{V}_1$, could be caused by differences in sample sizes $n_0, n_1$ or in group-conditional feature variance $\mathrm{Var}(X \mid A)$, combined with a high-capacity model. Targeted collection of training samples may help resolve this issue. Our decomposition does not apply to post-hoc randomization methods (Hardt et al., 2016), but we may treat these in the same way as we do random training sets and interpret them as increasing the variance $\overline{V}_a$ of one group to improve fairness. When noise is significantly different between protected groups, discrimination is partially unrelated to model choice and training set size, and may only be reduced by measuring additional variables.

Proposition 1. If $\overline{N}_0 \ne \overline{N}_1$, no model can be 0-discriminatory in expectation without access to additional information or increasing bias or variance w.r.t. the Bayes optimal classifier.

Proof. By definition, $\Gamma = 0 \implies (\overline{N}_1 - \overline{N}_0) = (\overline{B}_0 - \overline{B}_1) + (\overline{V}_0 - \overline{V}_1)$. As the Bayes optimal classifier has neither bias nor variance, the result follows immediately.

In line with Proposition 1, most methods for ensuring algorithmic fairness reduce discrimination by trading off a difference in noise for one in bias or variance. However, this trade-off is only motivated if the considered predictive model is close to Bayes optimal and no additional predictive variables may be measured. Moreover, if noise is homoskedastic in regression settings, post-hoc randomization is ill-advised, as the difference in Bayes error $\overline{N}_0 - \overline{N}_1$ is zero, and discrimination is caused only by model bias or variance (see the supplementary material for a proof).

Estimating bias, variance and noise. Group-specific variance $\overline{V}_a$ may be estimated through sample splitting or bootstrapping (Efron, 1992), as sketched below. In contrast, the noise $\overline{N}_a$ and bias $\overline{B}_a$ are difficult to estimate when $X$ is high-dimensional or continuous. In fact, no convergence results for noise estimates may be obtained without further assumptions on the data distribution (Antos et al., 1999). Under some such assumptions, noise may be approximately estimated using distance-based methods (Devijver & Kittler, 1982), nearest-neighbor methods (Fukunaga & Hummels, 1987; Cover & Hart, 1967), or classifier ensembles (Tumer & Ghosh, 1996). When comparing the discrimination levels of two different models, the noise terms cancel, as they are independent of the model. As a result, differences in bias may be estimated even when the noise is not known (see the supplementary material).
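As a minimal illustration of the bootstrap estimate of the group-specific variance term (our sketch, with a synthetic data-generating process; the sign factor $c_v$ from Theorem 1 is ignored for simplicity): train the same learner on resampled training sets, take the majority vote as the main prediction, and measure per-group disagreement:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n, B = 2000, 30                                  # sample size, bootstraps

# Synthetic population: group A=1 has noisier labels than group A=0.
X = rng.normal(size=(n, 5)); A = rng.integers(0, 2, n)
p = 1 / (1 + np.exp(-X[:, 0]))                   # outcome model
flip = np.where(A == 1, 0.25, 0.05)              # heteroskedastic noise
y = (rng.random(n) < p).astype(int)
y = np.where(rng.random(n) < flip, 1 - y, y)

X_te, A_te = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
preds = np.empty((B, 500), dtype=int)
for b in range(B):                               # bootstrap training sets
    idx = rng.integers(0, n, n)
    clf = DecisionTreeClassifier().fit(X[idx], y[idx])
    preds[b] = clf.predict(X_te)

main = (preds.mean(axis=0) > 0.5).astype(int)    # majority-vote prediction
V_point = (preds != main).mean(axis=0)           # zero-one variance per point
for g in (0, 1):
    print(f"estimated V_{g}: {V_point[A_te == g].mean():.3f}")
```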
Testing for significant discrimination. When sample sizes are small, perceived discrimination may not be statistically significant. In the supplementary material, we give statistical tests both for the discrimination level $\Gamma(\hat{Y})$ and for the difference in discrimination level between two models $\hat{Y}, \hat{Y}'$.

4 Reducing discrimination through data collection

In light of the decomposition of Theorem 1, we explore avenues for reducing group differences in bias, variance, and noise without sacrificing predictive accuracy. In practice, predictive accuracy is often artificially limited when data is expensive or impractical to collect. With an investment in training samples or in the measurement of predictive variables, both accuracy and fairness may be improved.

4.1 Increasing training set size

Standard regularization used to avoid overfitting is not guaranteed to improve or preserve fairness. An alternative route is to collect more training samples and reduce the impact of the bias-variance trade-off. When supplementary data is collected from the same distribution as the existing set, covariate shift may be avoided (Quionero-Candela et al., 2009). This is often achievable; labeled data may be expensive, such as when paying experts to label observations, but given the means to acquire additional labels, they would be drawn from the original distribution. To estimate the value of increasing the sample size, we predict the discrimination level $\Gamma(\hat{Y}_D)$ as $D$ increases in size. The curve measuring the generalization performance of predictive models as a function of training set size $n$ is called a Type II learning curve (Domhan et al., 2015). We call $\gamma_a(\hat{Y}, n) := \mathbb{E}[\gamma_a(\hat{Y}_{D_n})]$, as a function of $n$, the learning curve with respect to protected group $a$. We define the discrimination learning curve $\Gamma(\hat{Y}, n) := |\gamma_0(\hat{Y}, n) - \gamma_1(\hat{Y}, n)|$ (see Figure 2a for an example). Empirically, learning curves behave asymptotically as inverse power-law curves for algorithms as diverse as deep neural networks, support vector machines, and nearest-neighbor classifiers, even when model capacity is allowed to grow with $n$ (Hestness et al., 2017; Mukherjee et al., 2003). This observation is also supported by theoretical results (Amari, 1993).

Assumption 1 (Learning curves). The population prediction loss $\gamma(\hat{Y}, n)$ and the group-specific losses $\gamma_0(\hat{Y}, n), \gamma_1(\hat{Y}, n)$, for a fixed learning algorithm $\hat{Y}$, behave asymptotically as inverse power-law curves with parameters $(\alpha, \beta, \delta)$. That is, there exist $M, M_0, M_1$ such that for $n \ge M$ and $n_a \ge M_a$,
$$\gamma(\hat{Y}, n) = \alpha n^{-\beta} + \delta \quad \text{and} \quad \forall a \in \mathcal{A}: \gamma_a(\hat{Y}, n_a) = \alpha_a n_a^{-\beta_a} + \delta_a. \quad (3)$$

The intercepts $\delta, \delta_a$ in (3) represent the asymptotic bias $\overline{B}(\hat{Y}_{D_\infty})$ and the Bayes error $\overline{N}$, with the former vanishing for consistent estimators. Accurately estimating $\delta$ from finite samples is often challenging, as the first term tends to dominate the learning curve for practical sample sizes. In experiments, we find that inverse power laws fit group-conditional ($\gamma_a$) and class-conditional (FPR, FNR) errors well, and we use them to extrapolate $\Gamma(\hat{Y}, n)$ based on estimates from subsampled data; a minimal version of this procedure is sketched below.
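A small sketch (ours) of this extrapolation under Assumption 1: fit $\alpha_a n^{-\beta_a} + \delta_a$ to per-group error estimates from subsampled training sets and extrapolate the gap. The error values below are placeholders rather than measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, alpha, beta, delta):
    return alpha * n ** (-beta) + delta

ns = np.array([500, 1000, 2000, 4000, 8000, 16000], dtype=float)
# Placeholder subsampling estimates of FNR_a(Y_hat, n) for two groups.
fnr0 = np.array([0.52, 0.48, 0.45, 0.43, 0.415, 0.405])
fnr1 = np.array([0.62, 0.56, 0.52, 0.49, 0.47, 0.455])

params = {}
for a, fnr in ((0, fnr0), (1, fnr1)):
    p, _ = curve_fit(power_law, ns, fnr, p0=(1.0, 0.5, 0.3), maxfev=20000)
    params[a] = p

for n_new in (32000, 1e6):
    gap = abs(power_law(n_new, *params[0]) - power_law(n_new, *params[1]))
    print(f"predicted Gamma_FNR at n={int(n_new)}: {gap:.3f}")
# The fitted deltas estimate the asymptotic (noise + bias) floor per group.
print("delta_0, delta_1:", params[0][2], params[1][2])
```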
4.2 Measuring additional variables

When discrimination Γ is dominated by a difference in noise, N₀ − N₁, fairness may not be improved through model selection alone without sacrificing accuracy (see Proposition 1). Such a scenario is likely when the available covariates are not equally predictive of the outcome in both groups. We propose identifying clusters of individuals in which discrimination is high as a means to guide further variable collection: if the variance in outcomes within a cluster is not explained by the available feature set, additional variables may be used to further distinguish its members.

Let a random variable C represent a (possibly stochastic) clustering such that C = c indicates membership in cluster c, and let ρ_a(c) denote the expected prediction cost for units in cluster c with protected attribute a. For example, for the zero-one loss we let ρ_a^ZO(c) := E_X[1[Ŷ ≠ Y] | A = a, C = c], and define ρ analogously for false positives or false negatives. Clusters c for which |ρ₀(c) − ρ₁(c)| is large identify groups of individuals for which discrimination is worse than average, and can guide targeted collection of additional variables or samples. In our experiments on income prediction, we consider particularly simple clusterings, defined by whether a subject's measurement of a single feature x^(c), c ∈ {1, . . . , k}, lies above or below its average value. In mortality prediction, we cluster patients using topic modeling. As measuring additional variables is expensive, the utility of a candidate set should be estimated before collecting a large sample (Koepke & Bilenko, 2012).

5 Experiments

We analyze the fairness properties of standard machine learning algorithms in three tasks: predicting income from national census data, predicting patient mortality from clinical notes, and predicting book review ratings from review text.¹ We disentangle sources of discrimination by assessing the level of discrimination on the full data, estimating the value of increasing training set size by fitting Type II learning curves, and using clustering to identify subgroups where discrimination is high. In addition, we estimate the Bayes error through non-parametric techniques. In our experiments, we omit the sensitive attribute A from our classifiers to allow closer comparison to previous work, e.g., Hardt et al. (2016); Zafar et al. (2017). In preliminary results, we found that fitting separate classifiers for each group increased the error rates of both groups due to the resulting smaller sample sizes, as classifiers could not learn from the other groups. As our objective is to maximize accuracy over all data points, our analysis uses a single classifier trained on the entire population.

5.1 Income prediction

Predictions of a person's salary may be used to help determine an individual's market worth, but systematic underestimation of the salaries of protected groups could harm their competitiveness on the job market. The Adult dataset in the UCI Machine Learning Repository (Lichman, 2013) contains 32,561 observations of yearly income (represented as a binary outcome: over or under $50,000) and twelve categorical or continuous features, including education, age, and marital status. Categorical attributes are dichotomized, resulting in a total of 105 features. We follow Pleiss et al. (2017) and strive to ensure fairness across genders; gender is excluded as a feature from the predictive models. Using an 80/20 train-test split, we learn a random forest predictor, which is well-calibrated for both groups (Brier (1950) scores of 0.13 and 0.06 for men and women). We find that the difference in zero-one loss Γ_ZO(Ŷ) has a 95%-confidence interval² of 0.085 ± 0.069 with decision thresholds at 0.5. At this threshold, the false negative rates are 0.388 ± 0.026 and 0.448 ± 0.064 for men and women respectively, and the false positive rates 0.111 ± 0.011 and 0.033 ± 0.008. A sketch of how such group-wise rates and their gaps are computed is given below.

¹A synthetic experiment validating group-specific learning curves is left to the supplementary material.
²Details for computing statistically significant discrimination can be found in the supplementary material.
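For concreteness, a minimal sketch of the group-wise error rates and the resulting discrimination levels, assuming binary labels, binary 0/1 predictions, and a binary protected attribute stored as NumPy arrays (the names are placeholders):

```python
import numpy as np

def group_rates(y_true, y_pred, a, group):
    """FPR_a = E[Yhat | Y=0, A=a] and FNR_a = E[1 - Yhat | Y=1, A=a]."""
    m = (a == group)
    fpr = y_pred[m & (y_true == 0)].mean()
    fnr = 1.0 - y_pred[m & (y_true == 1)].mean()
    return fpr, fnr

def discrimination_levels(y_true, y_pred, a):
    """Gamma_FPR and Gamma_FNR: absolute group differences in error rates."""
    fpr0, fnr0 = group_rates(y_true, y_pred, a, 0)
    fpr1, fnr1 = group_rates(y_true, y_pred, a, 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)
```

Confidence intervals such as those reported above can then be obtained by recomputing these quantities over repeated train-test splits or bootstrap resamples of the test set.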
Figure 2: Discrimination level and noise estimation in income prediction on the Adult dataset. (a) Group differences Γ in false positive rates and false negative rates for a random forest classifier, plotted against training set size n on log-log axes (n from 10³ to 10⁴, Γ from roughly 0.08 to 0.15); both differences decrease with increasing training set size. (b) Estimated Bayes error lower and upper bounds (E_low and E_up) for the zero-one loss of men and women; intervals for men and women are non-overlapping for Nearest Neighbors:

    Method                                   group   E_low   E_up
    Mahalanobis (Mahalanobis, 1936)          men     –       0.29
                                             women   –       0.13
    Bhattacharyya (Bhattacharyya, 1943)      men     0.001   0.040
                                             women   0.001   0.027
    Nearest Neighbors (Cover & Hart, 1967)   men     0.10    0.19
                                             women   0.04    0.07

We focus on random forest classifiers, although we found similar results for logistic regression and decision trees. We examine the effect of varying the training set size n on discrimination. We fit inverse power-law curves to estimates of FPR(Ŷ, n) and FNR(Ŷ, n) using repeated sample splitting, where at least 20% of the full data is held out for evaluating generalization error at every value of n. We tune hyperparameters at each training set size for the decision tree classifiers and logistic regression, but tune over the entire dataset for the random forest. Full training details are included in the supplementary material. Metrics are averaged over 50 trials. See Figure 2a for the random forest results. Both FPR and FNR decrease with additional training samples. The discrimination level Γ_FNR for false negatives decreases by a striking 40% when the training set grows from 1,000 to 10,000 samples. This suggests that trading off accuracy for fairness at small sample sizes may be ill-advised. Based on the fitted power-law curves, we estimate that with unlimited training data drawn from the same distribution, we would have Γ_FNR(Ŷ) ≈ 0.04 and Γ_FPR(Ŷ) ≈ 0.08.

In Figure 2b, we compare estimated upper and lower bounds on the noise (E_low and E_up) for men and women using the Mahalanobis and Bhattacharyya distances (Devijver & Kittler, 1982) and a k-nearest-neighbor method (Cover & Hart, 1967) with k = 5 and 5-fold cross-validation. Men have consistently higher noise estimates than women, which is consistent with the differences in zero-one loss found across all models. For the nearest-neighbor estimates, the intervals for men and women are non-overlapping, which suggests that noise contributes substantially to discrimination.

To guide attempts at reducing discrimination further, we identify clusters of individuals for whom false negative predictions are made at different rates between protected groups, using the method described in Section 4.2 (see the sketch below). We find that for individuals in executive or managerial occupations (12% of the sample), false negatives are more than twice as frequent for women (0.412) as for men (0.157). For individuals in all other occupations, the difference is significantly smaller, 0.543 for women and 0.461 for men, despite the disparity in outcome base rates in this cluster being large (0.26 for men versus 0.09 for women). A possible reason is that in managerial occupations the available variable set explains a larger portion of the variance in salary for men than for women. If so, further sub-categorization of managerial occupations could help reduce discrimination in prediction.
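A minimal sketch of the single-feature clustering used here, assuming a per-unit 0/1 error indicator (e.g., false negatives among the true positives) has already been computed; the names and the minimum-cluster-size guard are our own illustrative choices.

```python
import numpy as np

def cluster_gap(errors, a, c):
    """|rho_0(c) - rho_1(c)| for one cluster given by the boolean mask c, where
    `errors` is a per-unit 0/1 error indicator and `a` the binary attribute."""
    rho0 = errors[c & (a == 0)].mean()
    rho1 = errors[c & (a == 1)].mean()
    return abs(rho0 - rho1)

def worst_feature_split(X, errors, a, min_size=30):
    """Two clusters per feature j (below/above its mean, as in Section 4.2);
    return the largest between-group error gap and the feature defining it."""
    best_gap, best_j = 0.0, None
    for j in range(X.shape[1]):
        above = X[:, j] >= X[:, j].mean()
        for c in (above, ~above):
            if c.sum() >= min_size:                       # skip tiny clusters
                g = cluster_gap(errors, a, c)
                if g > best_gap:
                    best_gap, best_j = g, j
    return best_gap, best_j
```

Applied to the Adult data with an occupation indicator, such a scan surfaces splits like the executive/managerial cluster discussed above.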
5.2 Intensive care unit mortality prediction

Unstructured medical data such as clinical notes can reveal insights for questions like mortality prediction; however, disparities in predictive accuracy may result in discrimination against protected groups. Using the MIMIC-III dataset of all clinical notes from 25,879 adult patients at Beth Israel Deaconess Medical Center (Johnson et al., 2016), we predict the hospital mortality of patients in critical care. Fairness is studied with respect to five self-reported ethnic groups of the following proportions: Asian (2.2%), Black (8.8%), Hispanic (3.4%), White (70.8%), and Other (14.8%). Notes were collected in the first 48 hours of an intensive care unit (ICU) stay; discharge notes were excluded, and only patients who stayed in the ICU for more than 48 hours were included. We use the tf-idf statistics of the 10,000 most frequent words as features. Training a model on 50% of the data, selecting hyperparameters on 25%, and testing on 25%, we find that logistic regression with L1 regularization achieves an AUC of 0.81. The logistic regression is well-calibrated, with Brier scores ranging from 0.06 to 0.11 across the five groups; we note that better calibration is correlated with lower prediction error. We report cost and discrimination level in terms of the generalized zero-one loss (Pleiss et al., 2017). Using an ANOVA test (Fisher, 1925) with p < 0.001, we reject the null hypothesis that the loss is the same among all five groups. To map the 95% confidence intervals, we perform pairwise comparisons of means using Tukey's range test (Tukey, 1949) across 5-fold cross-validation. As seen in Figure 3a, patients in the Other and Hispanic groups have the highest and lowest generalized zero-one loss, respectively, with relatively few overlapping intervals. Notably, the largest ethnic group (White) does not have the best accuracy, whereas smaller ethnic groups tend towards the extremes. While racial groups differ in hospital mortality base rates (Table 1 in the supplementary material), Hispanic (10.3%) and Black (10.9%) patients have very different error rates despite similar base rates.

To better understand the discrimination induced by our model, we explore the effect of changing the training set size. To this end, we repeatedly subsample and split the data, holding out at least 20% of the full data for testing. In Figure 3b, we show the loss, averaged over 50 trials, of training a logistic regression on increasingly large training sets; the estimated inverse power-law curves show good fits. Some pairwise differences in loss decrease with additional training data.

Next, we identify clusters for which the difference in prediction errors between protected groups is large. We learn a topic model with k = 50 topics generated using Latent Dirichlet Allocation (Blei et al., 2003). Topics are collected into an n × k matrix Q, where q_ic designates the proportion of topic c ∈ [k] in note i ∈ [n]. Following prior work on enrichment of topics in clinical notes (Marlin et al., 2012; Ghassemi et al., 2014), we estimate the probability of patient mortality Y given a topic c as p̂(Y | C = c) := (Σ_{i=1}^n y_i q_ic) / (Σ_{i=1}^n q_ic), where y_i is the hospital mortality of patient i. We compare relative error rates given protected group and topic, using the binary predicted mortality ŷ_i, actual mortality y_i, and group a_i of patient i, through

    p̂(Ŷ ≠ Y | A = a′, C = c) = [ Σ_{i=1}^n 1(y_i ≠ ŷ_i) 1(a_i = a′) q_ic ] / [ Σ_{i=1}^n 1(a_i = a′) q_ic ] ,

which follows by substitution and conditioning on A; a direct implementation is sketched below.
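This topic-weighted error rate is a one-liner in matrix form. The sketch below implements the display above, assuming Q is the n × k topic-proportion matrix and the remaining arrays are as in the text (all names are placeholders):

```python
import numpy as np

def topic_error_rates(y_true, y_pred, a, Q, group):
    """p(Yhat != Y | A = group, C = c) for every topic c, weighting each
    patient's 0/1 error by their topic proportions q_ic (Q has shape n x k)."""
    err = (y_true != y_pred).astype(float)
    in_group = (a == group).astype(float)
    num = (err * in_group) @ Q        # sum_i 1(y_i != yhat_i) 1(a_i = group) q_ic
    den = in_group @ Q                # sum_i 1(a_i = group) q_ic
    return num / den                  # one error rate per topic c
```

Topics for which these rates spread widely across groups (as for the cardiac and cancer topics below) flag subpopulations where further variable or sample collection is most likely to help.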
These error rates were computed using logistic regression with L1 regularization and an 80/20 train-test split, over 50 trials. While many topics have consistent error rates across groups, some topics (e.g., cardiac patients or cancer patients, as shown in Figure 3c) have large differences in error rates across groups. We include more detailed topic descriptions in the supplementary material. Once we have identified a subpopulation with particularly high error, for example cancer patients, we can consider collecting more features or collecting more data from the same data distribution. We find that error rates differ between 0.12 and 0.30 across protected groups of cancer patients, and between 0.05 and 0.20 for cardiac patients.

5.3 Book review ratings

In the supplementary material, we study the prediction of book review ratings from review texts (Gnanesh, 2017). The protected attribute is the gender of the author, as determined from Wikipedia. On this dataset, the difference in mean squared error Γ_MSE(Ŷ) has 95%-confidence interval 0.136 ± 0.048, with MSE_M = 0.224 for reviews of male authors and MSE_F = 0.358 for female authors. Strikingly, our findings suggest that Γ_MSE(Ŷ) may be completely eliminated by additional targeted sampling of the less represented gender.

6 Discussion

Existing approaches for reducing discrimination induced by prediction errors may be unethical or impractical to apply in settings where predictive accuracy is critical, such as healthcare or criminal justice. As an alternative, we propose a procedure for analyzing the different sources contributing to discrimination. Decomposing well-known cost-based fairness criteria in terms of differences in bias, variance, and noise, we suggest methods for reducing each term through model choice or additional training data collection. Case studies on three real-world datasets confirm that collecting additional samples is often sufficient to improve fairness, and that existing post-hoc methods for reducing discrimination may unnecessarily sacrifice predictive accuracy when other solutions are available.

Looking forward, we see several avenues for future research. In this work, we argue that identifying clusters or subpopulations with high predictive disparity allows for more targeted ways to reduce discrimination. We encourage future research to dig deeper into the question of local or context-specific unfairness in general, and into algorithms for addressing it. Additionally, extending our analysis to intersectional fairness (Buolamwini & Gebru, 2018; Hébert-Johnson et al., 2017), e.g., looking at both gender and race or all their subdivisions, would allow a more nuanced grappling with unfairness. Finally, additional data collection intended to improve the model may cause unexpected delayed impacts (Liu et al., 2018) and negative feedback loops (Ensign et al., 2017) as a result of distributional shifts in the data. More broadly, we believe that the study of fairness in non-stationary populations is an interesting direction to pursue.

Acknowledgements

The authors would like to thank Yoni Halpern and Hunter Lang for helpful comments, and Zeshan Hussain for clinical guidance. This work was partially supported by Office of Naval Research Award No. N00014-17-1-2791 and NSF CAREER award #1350965.
1. What is the main contribution of the paper regarding fairness in machine learning?
2. What are the strengths of the proposed approach, particularly in comparison to other solutions in the field?
3. Do you have any concerns or suggestions regarding the experimental design or results?
4. How does the reviewer assess the novelty and practical applicability of the paper's ideas?
5. Are there any specific aspects of the paper that the reviewer finds unclear or confusing?
Review
This paper proposes looking at disparities in classifier outcomes across sensitive attributes as a problem to be remedied by data collection, as opposed to classifier modification. The authors advocate decomposing the loss of a classifier into bias, variance, and noise, which allows for a more fine-grained analysis of the sources of disparate outcomes. Based on this decomposition, the authors show that reducing disparity in outcomes may require increasing the bias or variance of predictions. The authors go on to suggest that instead of constraining classifiers to produce parity in outcomes, collecting more data could be sufficient to reduce discrimination. In particular, they make the assumption that the learning curve (error as a function of training set size) follows an inverse power law, an assumption with both empirical and theoretical support. Using subsampling, the authors demonstrate that this curve is a good approximation for the types of errors considered, and use its predictions to estimate the reduction in disparity that could be achieved by collecting additional data. They perform several experiments on publicly available datasets to demonstrate these improvements.

Finally, the authors suggest a framework for identifying clusters of individuals for whom additional features should be collected. Given a clustering, they propose a metric which indicates that the available features are more predictive for one subgroup than another, meaning that more features are required to improve predictive performance on the disadvantaged subgroup. They perform experiments with simple heuristics to choose clusters, finding some intuitively compelling explanations for disparities in predictive performance.

Overall, I think this is a solid paper. It provides a fresh take on a problem that has received a lot of attention lately. Moreover, it provides an approach that, unlike many in the fairness space, could actually be used in practice. Targeted collection of data is far less controversial and more legally feasible than most of the solutions that have been proposed. Technically, the paper makes some nice, clean points; however, to me, the main contribution is conceptual as opposed to technical. The experiments on finding clusters with high disparity seem to be more of a proof of concept than a proposed solution, and I would have liked to see a more structured way of identifying such clusters.
NIPS
1. What is the main contribution of the paper regarding discrimination in machine learning?
2. What are the strengths of the paper, particularly in its conceptual contribution?
3. Are there any concerns regarding the technical contribution of the paper?
4. How does the paper reconcile the difference between the definitions of hat{Y} in line 61 and Figure 1?
5. Can the authors provide more explanation or references to help readers understand the terms Bias and Variance in the loss decomposition?
6. Would discussing other clustering techniques to identify regions of high discrimination enhance the paper's value?
7. How do the results extend to the statistical parity notion of fairness?
8. Can the authors discuss the potential interaction between the proposed strategy of gathering more data and the delayed-impact-of-fairness framework proposed in Liu et al's study?
Review
The paper focuses on discrimination in machine learning. As opposed to the existing work on discrimination-aware machine learning, the goal of the paper is to go beyond the traditional discrimination-accuracy trade-offs. Instead, the paper proposes a framework to pinpoint the precise sources of discrimination in prediction outcomes, and proposes interventions that might enhance both the fairness and the accuracy of the model. As the paper notes, merely constraining existing (discriminatory) models to remove outcome disparities may not be sufficient: decreasing the accuracy of one or both groups just to achieve equality is an unsatisfactory trade-off. While previous authors alluded to the possibility of gathering more data to potentially reduce discrimination (https://arxiv.org/abs/1701.08230 and https://arxiv.org/abs/1610.02413), this paper is, to the best of this reviewer's knowledge, the first to formally tackle this extremely important direction. While the technical contribution is not necessarily huge, the conceptual contribution of the paper would definitely make it a valuable addition to the existing literature on the topic.

Detailed comments:
- hat{Y} in line 61 is defined as a function of both X and A. However, in Figure 1, both the outcomes as well as the predictions are assumed to be unaware of A (same as in the experimental section). How does one reconcile this difference? Does this have any implications for the analysis that follows?
- Figure 1 and Section 3 (until line 113) are very difficult to read. The paper does briefly state that it uses the loss decomposition of (Domingos, 2000), but for a reader not familiar with this framework, it is not entirely clear what precisely the Bias and Variance terms defined here are trying to measure. Domingos does provide intuitive explanations for these terms. Perhaps the authors can expand a bit more on them, or point the reader to the relevant publication (https://homes.cs.washington.edu/~pedrod/papers/mlc00a.pdf). Fixing this issue would greatly increase the readability of the paper.
- It would be nice to see discussion of some other clustering techniques to identify regions where the discrimination between the groups is high. Using uncertainty estimation (e.g., in Gaussian processes) might be one way to do that.
- How would the results extend to the statistical parity notion of fairness (https://arxiv.org/abs/1705.09055)?
- While not necessary for this submission, it would be great to see some discussion on how the results would extend to cases of intersectional fairness (https://arxiv.org/abs/1711.05144) -- the cases where one has more fine-grained groups defined by both gender and race (e.g., African-American men, White women).
- Though not necessary for the current submission, it would be interesting to see how the proposed strategy of gathering more data (examples or features) would interact with the delayed-impact-of-fairness framework proposed in the recent study by Liu et al (https://arxiv.org/pdf/1803.04383.pdf). Specifically, is gathering more data on the discriminated group likely to lessen the long-term stagnation or decline of these groups?
NIPS
Title Why Is My Classifier Discriminatory? Abstract Recent attempts to achieve fairness in predictive models focus on the balance between fairness and accuracy. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable as any increase in prediction error could have devastating consequences. In this work, we argue that the fairness of predictions should be evaluated in context of the data, and that unfairness induced by inadequate samples sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model. We decompose cost-based metrics of discrimination into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Finally, we perform case-studies on prediction of income, mortality, and review ratings, confirming the value of this analysis. We find that data collection is often a means to reduce discrimination without sacrificing accuracy. 1 Introduction As machine learning algorithms increasingly affect decision making in society, many have raised concerns about the fairness and biases of these algorithms, especially in applications to healthcare or criminal justice, where human lives are at stake (Angwin et al., 2016; Barocas & Selbst, 2016). It is often hoped that the use of automatic decision support systems trained on observational data will remove human bias and improve accuracy. However, factors such as data quality and model choice may encode unintentional discrimination, resulting in systematic disparate impact. We study fairness in prediction of outcomes such as recidivism, annual income, or patient mortality. Fairness is evaluated with respect to protected groups of individuals defined by attributes such as gender or ethnicity (Ruggieri et al., 2010). Following previous work, we measure discrimination in terms of differences in prediction cost across protected groups (Calders & Verwer, 2010; Dwork et al., 2012; Feldman et al., 2015). Correcting for issues of data provenance and historical bias in labels is outside of the scope of this work. Much research has been devoted to constraining models to satisfy cost-based fairness in prediction, as we expand on below. The impact of data collection on discrimination has received comparatively little attention. Fairness in prediction has been encouraged by adjusting models through regularization (Bechavod & Ligett, 2017; Kamishima et al., 2011), constraints (Kamiran et al., 2010; Zafar et al., 2017), and representation learning (Zemel et al., 2013). These attempts can be broadly categorized as modelbased approaches to fairness. Others have applied data preprocessing to reduce discrimination (Hajian & Domingo-Ferrer, 2013; Feldman et al., 2015; Calmon et al., 2017). For an empirical comparison, see for example Friedler et al. (2018). Inevitably, however, restricting the model class or perturbing training data to improve fairness may harm predictive accuracy (Corbett-Davies et al., 2017). A tradeoff of predictive accuracy for fairness is sometimes difficult to motivate when predictions influence high-stakes decisions. In particular, post-hoc correction methods based on randomizing predictions (Hardt et al., 2016; Pleiss et al., 2017) are unjustifiable for ethical reasons in clinical tasks 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. such as severity scoring. Moreover, as pointed out by Woodworth et al. 
(2017), post-hoc correction may lead to suboptimal predictive accuracy compared to other equally fair classifiers. Disparate predictive accuracy can often be explained by insufficient or skewed sample sizes, or by inherent unpredictability of the outcome given the available set of variables. With this in mind, we propose that the fairness of predictive models should be analyzed in terms of model bias, model variance, and outcome noise before the models are constrained to satisfy fairness criteria. This exposes and separates the adverse impacts of inadequate data collection and of the choice of model on fairness. The cost of fairness need not always be one of predictive accuracy, but one of investment in data collection and model development. In high-stakes applications, the benefits often outweigh the costs. In this work, we use the term "discrimination" to refer to specific kinds of differences in the predictive power of models when applied to different protected groups. In some domains, such differences may not be considered discriminatory, and it is critical that decisions made based on this information are sensitive to this fact. For example, in prior work, researchers showed that causal inference may help uncover which sources of differences in predictive accuracy introduce unfairness (Kusner et al., 2017). In this work, we assume that the observed differences are considered discriminatory and discuss various means of explaining and reducing them. Main contributions We give a procedure for analyzing discrimination in predictive models with respect to cost-based definitions of group fairness, emphasizing the impact of data collection. First, we propose the use of bias-variance-noise decompositions for separating sources of discrimination. Second, we suggest procedures for estimating the value of collecting additional training samples. Finally, we propose the use of clustering for identifying subpopulations that are discriminated against, to guide additional variable collection. We use these tools to analyze the fairness of common learning algorithms in three tasks: predicting income based on census data, predicting the mortality of patients in critical care, and predicting book review ratings from text. We find that the accuracy of mortality predictions for cancer patients varies by as much as 20% between protected groups. In addition, our experiments confirm that the discrimination level is sensitive to the quality of the training data. 2 Background We study fairness in the prediction of an outcome Y ∈ Y. Predictions are based on a set of covariates X ∈ X ⊆ R^k and a protected attribute A ∈ A. In mortality prediction, X represents the medical history of a patient in critical care, A the self-reported ethnicity, and Y mortality. A model is considered fair if its errors are distributed similarly across protected groups, as measured by a cost function γ. Predictions learned from a training set d are denoted Ŷ_d := h(X, A) for some h : X × A → Y from a class H. The protected attribute is assumed to be binary, A = {0, 1}, but our results generalize to the non-binary case. A dataset d = {(x_i, a_i, y_i)}_{i=1}^n consists of n samples distributed according to p(X, A, Y). When clear from context, we drop the subscript from Ŷ_d. A popular cost-based definition of fairness is the equalized odds criterion, which states that a binary classifier Ŷ is fair if its false negative rates (FNR) and false positive rates (FPR) are equal across groups (Hardt et al., 2016).
We define the FPR and FNR with respect to protected group a ∈ A by
FPR_a(Ŷ) := E_X[Ŷ | Y = 0, A = a],  FNR_a(Ŷ) := E_X[1 − Ŷ | Y = 1, A = a].
Exact equality, FPR_0(Ŷ) = FPR_1(Ŷ), is often hard to verify or enforce in practice. Instead, we study the degree to which such constraints are violated. More generally, we use differences in cost functions γ_a between protected groups a ∈ A to define the level of discrimination Γ,
Γ_γ(Ŷ) := |γ_0(Ŷ) − γ_1(Ŷ)|.   (1)
In this work we study cost functions γ_a ∈ {FPR_a, FNR_a, ZO_a} in binary classification tasks, with ZO_a(Ŷ) := E_X[1[Ŷ ≠ Y] | A = a] the zero-one loss. In regression problems, we use the group-specific mean squared error MSE_a := E_X[(Ŷ − Y)² | A = a]. According to (1), predictions Ŷ satisfy equalized odds on d if Γ_FPR(Ŷ) = 0 and Γ_FNR(Ŷ) = 0.
Calibration and impossibility  A score-based classifier is calibrated if the prediction score assigned to a unit equals the fraction of positive outcomes for all units assigned similar scores. It is impossible for a classifier to be calibrated in every protected group and satisfy multiple cost-based fairness criteria at once, unless accuracy is perfect or the base rates of outcomes are equal across groups (Chouldechova, 2017). A relaxed version of this result (Kleinberg et al., 2016) applies to the discrimination level Γ. Inevitably, both constraint-based methods and our approach are faced with a choice between which fairness criteria to satisfy, and at what cost.
3 Sources of perceived discrimination
There are many potential sources of discrimination in predictive models. In particular, the choice of hypothesis class H and learning objective has received a lot of attention (Calders & Verwer, 2010; Zemel et al., 2013; Fish et al., 2016). However, data collection (the chosen set of predictive variables X, the sampling distribution p(X, A, Y), and the training set size n) is an equally integral part of deploying fair machine learning systems in practice, and it should be guided to promote fairness. Below, we tease apart sources of discrimination through bias-variance-noise decompositions of cost-based fairness criteria. In general, we may think of noise in the outcome as the effect of a set of unobserved variables U, potentially interacting with X.
[Figure 1: Scenarios illustrating how properties of the training set and model choice affect perceived discrimination in a binary classification task, under the assumption that outcomes and predictions are unaware, i.e. p(Y | X, A) = p(Y | X) and p(Ŷ | X, A) = p(Ŷ | X). (a) For identically distributed protected groups and an unaware outcome, bias and noise are equal in expectation; perceived discrimination is only due to variance. (b) Heteroskedastic noise, i.e. ∃x, x′ : N(x) ≠ N(x′), may contribute to discrimination even for an optimal model if protected groups are not identically distributed. (c) One choice of model may be more suited to one protected group, even under negligible noise and variance, resulting in a difference in expected bias, B_0 ≠ B_1. Through bias-variance-noise decompositions (see Section 3.1), we can identify which of these dominate in their effect on fairness. We propose procedures for addressing each component in Section 4, and use them in experiments (see Section 5) to mitigate discrimination in income prediction and prediction of ICU mortality.]
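To make these definitions concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the empirical group-wise costs and the discrimination level Γ_γ(Ŷ) of Eq. (1); the array names y_true, y_pred and a are our own.

```python
import numpy as np

def group_costs(y_true, y_pred, a, group):
    """Empirical FPR_a, FNR_a and zero-one loss ZO_a for protected group `group`."""
    m = a == group
    y, yh = y_true[m], y_pred[m]
    fpr = yh[y == 0].mean()        # E[Yhat | Y = 0, A = a]
    fnr = (1 - yh[y == 1]).mean()  # E[1 - Yhat | Y = 1, A = a]
    zo = (yh != y).mean()          # E[1[Yhat != Y] | A = a]
    return fpr, fnr, zo

def discrimination_level(y_true, y_pred, a, cost=0):
    """Gamma_gamma(Yhat) = |gamma_0(Yhat) - gamma_1(Yhat)|; cost: 0=FPR, 1=FNR, 2=ZO."""
    return abs(group_costs(y_true, y_pred, a, 0)[cost]
               - group_costs(y_true, y_pred, a, 1)[cost])

# toy usage on synthetic labels with group-dependent error rates
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
a = rng.integers(0, 2, 10_000)
y_hat = np.where(rng.random(10_000) < 0.10 + 0.10 * a, 1 - y, y)  # noisier for a=1
print(discrimination_level(y, y_hat, a, cost=2))  # roughly 0.10 = Gamma_ZO
```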
Even the optimal achievable error for predictions based on X may be reduced further by observing parts of U. In Figure 1, we illustrate three common learning scenarios and study their fairness properties through bias, variance, and noise. To account for randomness in the sampling of training sets, we redefine the discrimination level (1) in terms of the expected cost γ_a(Ŷ) := E_D[γ_a(Ŷ_D)] over draws of a random training set D.
Definition 1. The expected discrimination level Γ(Ŷ) of a predictive model Ŷ learned from a random training set D is
Γ(Ŷ) := |E_D[γ_0(Ŷ_D) − γ_1(Ŷ_D)]| = |γ_0(Ŷ) − γ_1(Ŷ)|.
Γ(Ŷ) is not observed in practice when only a single training set d is available. If n is small, it is recommended to estimate Γ through re-sampling methods such as bootstrapping (Efron, 1992).
3.1 Bias-variance-noise decompositions of discrimination level
An algorithm that learns models Ŷ_D from datasets D is given, and the covariates X and the size of the training data n are fixed. We assume that Ŷ_D is a deterministic function ŷ_D(x, a) given the training set D, e.g. a thresholded scoring function. Following Domingos (2000), we base our analysis on decompositions of loss functions L evaluated at points (x, a). For decompositions of costs γ_a ∈ {ZO, FPR, FNR} we let this be the zero-one loss, L(y, y′) = 1[y ≠ y′], and for γ_a = MSE, the squared loss, L(y, y′) = (y − y′)². We define the main prediction ỹ(x, a) = argmin_{y′} E_D[L(Ŷ_D, y′) | X = x, A = a], which is the average prediction over draws of training sets for the squared loss, and the majority vote for the zero-one loss. The (Bayes) optimal prediction y*(x, a) = argmin_{y′} E_Y[L(Y, y′) | X = x, A = a] achieves the smallest expected error with respect to the random outcome Y.
Definition 2 (Bias, variance and noise). Following Domingos (2000), we define bias B, variance V and noise N at a point (x, a) as
B(Ŷ, x, a) = L(y*(x, a), ỹ(x, a)),  N(x, a) = E_Y[L(y*(x, a), Y) | X = x, A = a],  V(Ŷ, x, a) = E_D[L(ỹ(x, a), ŷ_D(x, a))].   (2)
Here, y*, ŷ and ỹ are all deterministic functions of (x, a), while Y is a random variable. In words, the bias B is the loss incurred by the main prediction relative to the optimal prediction. The variance V is the average loss incurred by the predictions learned from different datasets relative to the main prediction. The noise N is the remaining loss independent of the learning algorithm, often known as the Bayes error. We use these definitions to decompose Γ under various definitions of γ_a.
Theorem 1. With γ_a the group-specific zero-one loss or class-conditional versions thereof (e.g. FNR, FPR), or the mean squared error, γ_a and the discrimination level Γ admit decompositions of the form
γ_a(Ŷ) = N_a (noise) + B_a(Ŷ) (bias) + V_a(Ŷ) (variance)  and  Γ = |(N_0 − N_1) + (B_0 − B_1) + (V_0 − V_1)|,
where we leave out Ŷ in the decomposition of Γ for brevity. With B, V defined as in (2), we have
B_a(Ŷ) = E_X[B(Ŷ, X, a) | A = a]  and  V_a(Ŷ) = E_{X,D}[c_v(X, a) V(Ŷ_D, X, a) | A = a].
For the zero-one loss, c_v(x, a) = 1 if ỹ(x, a) = y*(x, a) and c_v(x, a) = −1 otherwise; for the squared loss, c_v(x, a) = 1. The noise term for population losses is
N_a := E_{X,Y}[c_n(X, a) L(y*(X, a), Y) | A = a],
and for class-conditional losses w.r.t. class y ∈ {0, 1},
N_a(y) := E_X[c_n(X, a) L(y*(X, a), y) | A = a, Y = y].
For the zero-one loss and its class-conditional variants, c_n(x, a) = 2 E_D[1[ŷ_D(x, a) = y*(x, a)]] − 1, and for the squared loss, c_n(x, a) = 1.
Proof sketch.
Conditioning and exchanging the order of expectation, the cases of the mean squared error and zero-one loss follow from Domingos (2000). Class-conditional losses follow from a case-by-case analysis of the possible errors. See the supplementary material for a full proof.
Theorem 1 points to distinct sources of perceived discrimination. Significant differences in bias, B_0 − B_1, indicate that the chosen model class is not flexible enough to fit both protected groups well (see Figure 1c). This is typical of (misspecified) linear models, which approximate non-linear functions well only in small regions of the input space. Regularization or post-hoc correction of models effectively increases the bias of one of the groups, and should be considered only if there is reason to believe that the original bias is already minimal. Differences in variance, V_0 − V_1, could be caused by differences in sample sizes n_0, n_1 or in the group-conditional feature variance Var(X | A), combined with a high-capacity model. Targeted collection of training samples may help resolve this issue. Our decomposition does not apply to post-hoc randomization methods (Hardt et al., 2016), but we may treat these in the same way as we do random training sets and interpret them as increasing the variance V_a of one group to improve fairness. When noise differs significantly between protected groups, discrimination is partially unrelated to model choice and training set size, and may only be reduced by measuring additional variables.
Proposition 1. If N_0 ≠ N_1, no model can be 0-discriminatory in expectation without access to additional information or increasing bias or variance w.r.t. the Bayes optimal classifier.
Proof. By definition, Γ = 0 implies (N_1 − N_0) = (B_0 − B_1) + (V_0 − V_1). As the Bayes optimal classifier has neither bias nor variance, the result follows immediately.
In line with Proposition 1, most methods for ensuring algorithmic fairness reduce discrimination by trading off a difference in noise for one in bias or variance. However, this trade-off is only motivated if the considered predictive model is close to Bayes optimal and no additional predictive variables may be measured. Moreover, if noise is homoskedastic in regression settings, post-hoc randomization is ill-advised, as the difference in Bayes error N_0 − N_1 is zero, and discrimination is caused only by model bias or variance (see the supplementary material for a proof).
Estimating bias, variance and noise  The group-specific variance V_a may be estimated through sample splitting or bootstrapping (Efron, 1992); see the sketch below. In contrast, the noise N_a and bias B_a are difficult to estimate when X is high-dimensional or continuous. In fact, no convergence results for noise estimates may be obtained without further assumptions on the data distribution (Antos et al., 1999). Under some such assumptions, noise may be approximately estimated using distance-based methods (Devijver & Kittler, 1982), nearest-neighbor methods (Fukunaga & Hummels, 1987; Cover & Hart, 1967), or classifier ensembles (Tumer & Ghosh, 1996). When comparing the discrimination levels of two different models, the noise terms cancel, as they are independent of the model. As a result, differences in bias may be estimated even when the noise is not known (see the supplementary material).
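As a minimal sketch of the variance estimate just mentioned (our own construction, assuming a squared loss and an sklearn-style estimator), the group-wise variance term V_a can be approximated by bootstrapping training sets; since the Bayes-optimal prediction y* is unknown, bias and noise are reported only as a sum.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def group_variance_decomposition(model, X_tr, y_tr, X_te, y_te, a_te,
                                 n_boot=50, seed=0):
    """Bootstrap estimate of the group-wise variance term V_a under squared loss;
    bias and noise are returned only as a sum, since y* is unknown."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):  # bootstrap resamples stand in for random training sets D
        idx = rng.integers(0, len(X_tr), len(X_tr))
        preds.append(clone(model).fit(X_tr[idx], y_tr[idx]).predict(X_te))
    preds = np.stack(preds)                          # shape (n_boot, n_test)
    y_main = preds.mean(axis=0)                      # main prediction (squared loss)
    v_pt = ((preds - y_main) ** 2).mean(axis=0)      # pointwise variance V(x, a)
    loss_pt = ((preds - y_te) ** 2).mean(axis=0)     # expected loss over draws of D
    return {g: {"variance": v_pt[a_te == g].mean(),
                "bias_plus_noise": (loss_pt - v_pt)[a_te == g].mean()}
            for g in np.unique(a_te)}

# example: group_variance_decomposition(DecisionTreeRegressor(max_depth=4), ...)
```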
Testing for significant discrimination  When sample sizes are small, perceived discrimination may not be statistically significant. In the supplementary material, we give statistical tests both for the discrimination level Γ(Ŷ) and for the difference in discrimination level between two models Ŷ, Ŷ′.
4 Reducing discrimination through data collection
In light of the decomposition of Theorem 1, we explore avenues for reducing group differences in bias, variance, and noise without sacrificing predictive accuracy. In practice, predictive accuracy is often artificially limited when data is expensive or impractical to collect. With an investment in training samples or in the measurement of predictive variables, both accuracy and fairness may be improved.
4.1 Increasing training set size
Standard regularization used to avoid overfitting is not guaranteed to improve or preserve fairness. An alternative route is to collect more training samples and reduce the impact of the bias-variance trade-off. When supplementary data is collected from the same distribution as the existing set, covariate shift may be avoided (Quionero-Candela et al., 2009). This is often achievable; labeled data may be expensive, such as when paying experts to label observations, but given the means to acquire additional labels, they would be drawn from the original distribution. To estimate the value of increasing the sample size, we predict the discrimination level Γ(Ŷ_D) as D increases in size. The curve measuring the generalization performance of predictive models as a function of the training set size n is called a Type II learning curve (Domhan et al., 2015). We call γ_a(Ŷ, n) := E[γ_a(Ŷ_{D_n})], as a function of n, the learning curve with respect to protected group a. We define the discrimination learning curve Γ(Ŷ, n) := |γ_0(Ŷ, n) − γ_1(Ŷ, n)| (see Figure 2a for an example). Empirically, learning curves behave asymptotically as inverse power-law curves for diverse algorithms such as deep neural networks, support vector machines, and nearest-neighbor classifiers, even when model capacity is allowed to grow with n (Hestness et al., 2017; Mukherjee et al., 2003). This observation is also supported by theoretical results (Amari, 1993).
Assumption 1 (Learning curves). The population prediction loss γ(Ŷ, n) and the group-specific losses γ_0(Ŷ, n), γ_1(Ŷ, n), for a fixed learning algorithm Ŷ, behave asymptotically as inverse power-law curves with parameters (α, β, δ). That is, there exist M, M_0, M_1 such that for n ≥ M and n_a ≥ M_a,
γ(Ŷ, n) = α n^(−β) + δ  and  ∀a ∈ A : γ_a(Ŷ, n_a) = α_a n_a^(−β_a) + δ_a.   (3)
The intercepts δ, δ_a in (3) represent the asymptotic bias B(Ŷ_{D_∞}) and the Bayes error N, with the former vanishing for consistent estimators. Accurately estimating δ from finite samples is often challenging, as the first term tends to dominate the learning curve for practical sample sizes. In experiments, we find that the inverse power-law model fits group-conditional (γ_a) and class-conditional (FPR, FNR) errors well, and we use these fits to extrapolate Γ(Ŷ, n) based on estimates from subsampled data, as sketched below.
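A minimal sketch of this extrapolation step, assuming the inverse power-law form of Eq. (3) and using scipy's curve_fit; the measured-error arrays here are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, alpha, beta, delta):
    return alpha * n ** (-beta) + delta  # Eq. (3): gamma(n) = alpha * n^-beta + delta

def fit_curve(ns, errs):
    # bounds keep beta >= 0 and the asymptote delta in [0, 1]
    pars, _ = curve_fit(power_law, ns, errs, p0=(1.0, 0.5, 0.05),
                        bounds=([0.0, 0.0, 0.0], [np.inf, 5.0, 1.0]))
    return pars

# errs_a[i]: estimated cost gamma_a at subsampled training-set size ns[i]
ns = np.array([500, 1000, 2000, 4000, 8000, 16000], dtype=float)
errs_0 = 0.9 * ns ** -0.4 + 0.08   # placeholder measurements for group 0
errs_1 = 1.2 * ns ** -0.4 + 0.04   # placeholder measurements for group 1
pars_0, pars_1 = fit_curve(ns, errs_0), fit_curve(ns, errs_1)
for n in (1e4, 1e5, 1e6):          # extrapolated discrimination learning curve
    print(int(n), abs(power_law(n, *pars_0) - power_law(n, *pars_1)))
```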
4.2 Measuring additional variables
When the discrimination Γ is dominated by a difference in noise, N_0 − N_1, fairness may not be improved through model selection alone without sacrificing accuracy (see Proposition 1). Such a scenario is likely when the available covariates are not equally predictive of the outcome in both groups. We propose identifying clusters of individuals in which discrimination is high as a means to guide further variable collection: if the variance in outcomes within a cluster is not explained by the available feature set, additional variables may be used to further distinguish its members. Let a random variable C represent a (possibly stochastic) clustering such that C = c indicates membership in cluster c. Then let ρ_a(c) denote the expected prediction cost for units in cluster c with protected attribute a. As an example, for the zero-one loss we let ρ_a^ZO(c) := E_X[1[Ŷ ≠ Y] | A = a, C = c], and define ρ analogously for false positives or false negatives. Clusters c for which |ρ_0(c) − ρ_1(c)| is large identify groups of individuals for which discrimination is worse than average, and can guide targeted collection of additional variables or samples; see the sketch below. In our experiments on income prediction, we consider particularly simple clusterings of the data, defined by subjects with measurements above or below the average value of a single feature x^(c), with c ∈ {1, . . . , k}. In mortality prediction, we cluster patients using topic modeling. As measuring additional variables is expensive, the utility of a candidate set should be estimated before collecting a large sample (Koepke & Bilenko, 2012).
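The following sketch (our own, with hypothetical array names) implements the single-feature clustering audit described above: each feature is split at its mean, and the cluster-wise gap |ρ_0(c) − ρ_1(c)| in zero-one loss is reported.

```python
import numpy as np

def single_feature_cluster_gaps(X, y_true, y_pred, a):
    """For each feature j, split at its mean and report the largest
    cluster-wise gap |rho_0(c) - rho_1(c)| in zero-one loss."""
    gaps = {}
    for j in range(X.shape[1]):
        worst = 0.0
        for mask in (X[:, j] < X[:, j].mean(), X[:, j] >= X[:, j].mean()):
            rho = []
            for g in (0, 1):
                m = mask & (a == g)
                rho.append((y_pred[m] != y_true[m]).mean() if m.any() else np.nan)
            gap = abs(rho[0] - rho[1])
            if not np.isnan(gap):
                worst = max(worst, gap)
        gaps[j] = worst  # large values flag clusters for extra variables/samples
    return gaps
```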
5 Experiments
We analyze the fairness properties of standard machine learning algorithms in three tasks: prediction of income based on national census data, prediction of patient mortality based on clinical notes, and prediction of book review ratings based on review text. (A synthetic experiment validating group-specific learning curves is left to the supplementary material.) We disentangle sources of discrimination by assessing the level of discrimination on the full data, estimating the value of increasing the training set size by fitting Type II learning curves, and using clustering to identify subgroups where discrimination is high. In addition, we estimate the Bayes error through non-parametric techniques. In our experiments, we omit the sensitive attribute A from our classifiers to allow for a closer comparison to previous works, e.g. Hardt et al. (2016); Zafar et al. (2017). In preliminary results, we found that fitting separate classifiers for each group increased the error rates of both groups due to the resulting smaller sample sizes, as classifiers could not learn from the other groups. As our model objective is to maximize accuracy over all data points, our analysis uses a single classifier trained on the entire population.
5.1 Income prediction
Predictions of a person's salary may be used to help determine an individual's market worth, but systematic underestimation of the salary of protected groups could harm their competitiveness on the job market. The Adult dataset in the UCI Machine Learning Repository (Lichman, 2013) contains 32,561 observations of yearly income (represented as a binary outcome: over or under $50,000) and twelve categorical or continuous features including education, age, and marital status. Categorical attributes are dichotomized, resulting in a total of 105 features. We follow Pleiss et al. (2017) and strive to ensure fairness across genders; gender is excluded as a feature from the predictive models. Using an 80/20 train-test split, we learn a random forest predictor, which is well-calibrated for both groups (Brier (1950) scores of 0.13 and 0.06 for men and women). We find that the difference in zero-one loss, Γ_ZO(Ŷ), has a 95%-confidence interval of 0.085 ± 0.069 with decision thresholds at 0.5. (Details for computing statistically significant discrimination can be found in the supplementary material.) At this threshold, the false negative rates are 0.388 ± 0.026 and 0.448 ± 0.064 for men and women respectively, and the false positive rates are 0.111 ± 0.011 and 0.033 ± 0.008. We focus on random forest classifiers, although we found similar results for logistic regression and decision trees.
[Figure 2: Discrimination level and noise estimation in income prediction with the Adult dataset. (a) Group differences Γ in false positive rates and false negative rates for a random forest classifier decrease with increasing training set size (both axes on log scale). (b) Estimated Bayes error lower and upper bounds (E_low and E_up) for the zero-one loss of men and women:
Method | E_low, E_up (men) | E_low, E_up (women)
Mahalanobis (Mahalanobis, 1936) | –, 0.29 | –, 0.13
Bhattacharyya (Bhattacharyya, 1943) | 0.001, 0.040 | 0.001, 0.027
Nearest Neighbors (Cover & Hart, 1967) | 0.10, 0.19 | 0.04, 0.07
Intervals for men and women are non-overlapping for Nearest Neighbors.]
We examine the effect of varying the training set size n on discrimination. We fit inverse power-law curves to estimates of FPR(Ŷ, n) and FNR(Ŷ, n) using repeated sample splitting, where at least 20% of the full data is held out for evaluating generalization error at every value of n. We tune hyperparameters for each training set size for the decision tree classifiers and logistic regression, but tune over the entire dataset for random forests. We include full training details in the supplementary material. Metrics are averaged over 50 trials. See Figure 2a for the results for random forests. Both the FPR and FNR decrease with additional training samples. The discrimination level Γ_FNR for false negatives decreases by a striking 40% when increasing the training set size from 1,000 to 10,000. This suggests that trading off accuracy for fairness at small sample sizes may be ill-advised. Based on the fitted power-law curves, we estimate that for unlimited training data drawn from the same distribution, we would have Γ_FNR(Ŷ) ≈ 0.04 and Γ_FPR(Ŷ) ≈ 0.08. In Figure 2b, we compare estimated upper and lower bounds on the noise (E_low and E_up) for men and women using the Mahalanobis and Bhattacharyya distances (Devijver & Kittler, 1982) and a k-nearest-neighbor method (Cover & Hart, 1967) with k = 5 and 5-fold cross-validation; a sketch of the nearest-neighbor bound follows below. Men have consistently higher noise estimates than women, which is consistent with the differences in zero-one loss found using all models. For the nearest-neighbor estimates, the intervals for men and women are non-overlapping, which suggests that noise may contribute substantially to discrimination. To guide attempts at reducing discrimination further, we identify clusters of individuals for whom false negative predictions are made at different rates between protected groups, using the method described in Section 4.2. We find that for individuals in executive or managerial occupations (12% of the sample), false negatives are more than twice as frequent for women (0.412) as for men (0.157). For individuals in all other occupations, the difference is significantly smaller, 0.543 for women and 0.461 for men, despite the fact that the disparity in outcome base rates in this cluster is large (0.26 for men versus 0.09 for women). A possible reason is that in managerial occupations the available variable set explains a larger portion of the variance in salary for men than for women. If so, further sub-categorization of managerial occupations could help reduce discrimination in prediction.
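A sketch of one common way to obtain nearest-neighbor bounds of the kind reported in Figure 2b: we assume the classical Cover & Hart (1967) relation R* ≤ R_NN ≤ 2R*(1 − R*) for binary labels and estimate the kNN error by cross-validation. This is our reading of the procedure, not the authors' exact recipe.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def bayes_error_bounds(X, y, a, group, k=5, folds=5):
    """Group-wise lower/upper bounds (E_low, E_up) on the Bayes error via kNN."""
    m = a == group
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X[m], y[m],
                          cv=folds, scoring="accuracy").mean()
    r_nn = 1.0 - acc                                       # kNN error estimate
    # invert R_NN <= 2 R*(1 - R*) to lower-bound the Bayes error R*
    e_low = (1.0 - np.sqrt(max(0.0, 1.0 - 2.0 * r_nn))) / 2.0
    e_up = r_nn                                            # Bayes error <= kNN error
    return e_low, e_up
```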
5.2 Intensive care unit mortality prediction
Unstructured medical data such as clinical notes can reveal insights for questions like mortality prediction; however, disparities in predictive accuracy may result in discrimination of protected groups. Using the MIMIC-III dataset of all clinical notes from 25,879 adult patients from Beth Israel Deaconess Medical Center (Johnson et al., 2016), we predict hospital mortality of patients in critical care. Fairness is studied with respect to five self-reported ethnic groups of the following proportions: Asian (2.2%), Black (8.8%), Hispanic (3.4%), White (70.8%), and Other (14.8%). Notes were collected in the first 48 hours of an intensive care unit (ICU) stay; discharge notes were excluded. We only included patients that stayed in the ICU for more than 48 hours. We use the tf-idf statistics of the 10,000 most frequent words as features. Training a model on 50% of the data, selecting hyper-parameters on 25%, and testing on 25%, we find that logistic regression with L1-regularization achieves an AUC of 0.81. The logistic regression is well-calibrated, with Brier scores ranging from 0.06 to 0.11 across the five groups; we note that better calibration is correlated with lower prediction error. We report cost and discrimination level in terms of the generalized zero-one loss (Pleiss et al., 2017). Using an ANOVA test (Fisher, 1925) with p < 0.001, we reject the null hypothesis that the loss is the same among all five groups. To map the 95% confidence intervals, we perform pairwise comparisons of means using Tukey's range test (Tukey, 1949) across 5-fold cross-validation. As seen in Figure 3a, patients in the Other and Hispanic groups have the highest and lowest generalized zero-one loss, respectively, with relatively few overlapping intervals. Notably, the largest ethnic group (White) does not have the best accuracy, whereas smaller ethnic groups tend towards the extremes. While racial groups differ in hospital mortality base rates (Table 1 in the supplementary material), Hispanic (10.3%) and Black (10.9%) patients have very different error rates despite similar base rates. To better understand the discrimination induced by our model, we explore the effect of changing the training set size. To this end, we repeatedly subsample and split the data, holding out at least 20% of the full data for testing. In Figure 3b, we show the loss averaged over 50 trials of training a logistic regression on increasingly large training sets; the estimated inverse power-law curves show good fits. We see that some pairwise differences in loss decrease with additional training data. Next, we identify clusters for which the difference in prediction errors between protected groups is large. We learn a topic model with k = 50 topics generated using Latent Dirichlet Allocation (Blei et al., 2003). Topics are concatenated into an n × k matrix Q, where q_ic designates the proportion of topic c ∈ [k] in note i ∈ [n]. Following prior work on enrichment of topics in clinical notes (Marlin et al., 2012; Ghassemi et al., 2014), we estimate the probability of patient mortality Y given a topic c as p̂(Y | C = c) := (Σ_{i=1}^n y_i q_ic) / (Σ_{i=1}^n q_ic), where y_i is the hospital mortality of patient i. We compare relative error rates given the protected group and topic, using the binary predicted mortality ŷ_i, actual mortality y_i, and group a_i of patient i, through
p̂(Ŷ ≠ Y | A = a′, C = c) = (Σ_{i=1}^n 1[y_i ≠ ŷ_i] 1[a_i = a′] q_ic) / (Σ_{i=1}^n 1[a_i = a′] q_ic),
which follows using substitution and conditioning on A (a sketch of this computation is given below).
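A minimal sketch of the topic-conditioned error rate p̂(Ŷ ≠ Y | A = a′, C = c) above, where Q[i, c] stands for the LDA topic proportion q_ic of note i (variable names are our own).

```python
import numpy as np

def topic_error_rates(y_true, y_pred, a, Q):
    """Return an (n_groups, n_topics) array of topic-weighted error rates
    p_hat(Yhat != Y | A = a, C = c), weighting note i by its topic share Q[i, c]."""
    groups = np.unique(a)
    err = (y_true != y_pred).astype(float)       # 1[y_i != yhat_i]
    rates = np.full((len(groups), Q.shape[1]), np.nan)
    for gi, g in enumerate(groups):
        w = Q[a == g]                            # topic weights within group g
        num = err[a == g] @ w                    # sum_i 1[err_i] * q_ic
        den = w.sum(axis=0)                      # sum_i q_ic
        rates[gi] = np.divide(num, den,
                              out=np.full(Q.shape[1], np.nan), where=den > 0)
    return rates
```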
These error rates were computed with a logistic regression with L1 regularization, using an 80/20 train-test split over 50 trials. While many topics have consistent error rates across groups, some topics (e.g. cardiac patients or cancer patients, as shown in Figure 3c) have large differences in error rates across groups. We include more detailed topic descriptions in the supplementary material. Once we have identified a subpopulation with particularly high error, for example cancer patients, we can consider collecting more features or collecting more data from the same data distribution. We find that error rates differ between 0.12 and 0.30 across protected groups of cancer patients, and between 0.05 and 0.20 for cardiac patients.
5.3 Book review ratings
In the supplementary material, we study prediction of book review ratings from review texts (Gnanesh, 2017). The protected attribute was chosen to be the gender of the author, as determined from Wikipedia. In the dataset, the difference in mean squared error, Γ_MSE(Ŷ), has a 95%-confidence interval of 0.136 ± 0.048, with MSE_M = 0.224 for reviews of male authors and MSE_F = 0.358 for reviews of female authors. Strikingly, our findings suggest that Γ_MSE(Ŷ) may be completely eliminated by additional targeted sampling of the less represented gender.
6 Discussion
We identify that existing approaches for reducing discrimination induced by prediction errors may be unethical or impractical to apply in settings where predictive accuracy is critical, such as in healthcare or criminal justice. As an alternative, we propose a procedure for analyzing the different sources contributing to discrimination. Decomposing well-known definitions of cost-based fairness criteria in terms of differences in bias, variance, and noise, we suggest methods for reducing each term through model choice or additional training data collection. Case studies on three real-world datasets confirm that collection of additional samples is often sufficient to improve fairness, and that existing post-hoc methods for reducing discrimination may unnecessarily sacrifice predictive accuracy when other solutions are available. Looking forward, we see several avenues for future research. In this work, we argue that identifying clusters or subpopulations with high predictive disparity would allow for more targeted ways to reduce discrimination. We encourage future research to dig deeper into the question of local or context-specific unfairness in general, and into algorithms for addressing it. Additionally, extending our analysis to intersectional fairness (Buolamwini & Gebru, 2018; Hébert-Johnson et al., 2017), e.g. looking at both gender and race or all subdivisions, would provide more nuanced grappling with unfairness. Finally, additional data collection to improve the model may cause unexpected delayed impacts (Liu et al., 2018) and negative feedback loops (Ensign et al., 2017) as a result of distributional shifts in the data. More broadly, we believe that the study of fairness in non-stationary populations is an interesting direction to pursue.
Acknowledgements
The authors would like to thank Yoni Halpern and Hunter Lang for helpful comments, and Zeshan Hussain for clinical guidance. This work was partially supported by Office of Naval Research Award No. N00014-17-1-2791 and NSF CAREER award #1350965.
1. What is the main contribution of the paper regarding fairness measures?
2. What are the strengths and weaknesses of the proposed approach in addressing disparities in fairness measures?
3. How does the reviewer assess the clarity, originality, significance, and impact of the paper's content?
4. Does the reviewer have any concerns or suggestions regarding the paper's language and terminology usage, particularly in reference to the term "discrimination"?
Review
Review Summary/Contribution This paper shows that disparities in many fairness measures can be decomposed into “bias” (loss caused by deviating from the best feasible classifier), “variance” (loss caused by randomness in the dataset), and “noise” (the loss of the best feasible classifier). In doing so it illuminates some important issues that need to be considered when attempting to improve group fairness measures. For example, many people suspect that high error rates (either FPR or FNR) for a minority group is at least partially due to a lack of minority data. This work shows that these error rates can persist even with plentiful data (ie when the contribution of “variance” to the error rate differences is zero). Crucially, this paper recognizes that “existing approaches for reducing discrimination induced by prediction errors may be unethical or impractical to apply in settings where predictive accuracy is critical, such as in healthcare or criminal justice,” and proposes concrete means to improve algorithms without bearing this unacceptable cost. Weaknesses: The biggest weakness of this paper is its use of the word “discrimination” to refer to differences in cost functions between groups. The authors provide no justification for why differences in (for example) zero-one loss should be termed “discriminatory,” evoking injustice and animus in the reader’s mind. A classifier that tries to predict who will become an NFL quarterback will have a worse zero-one loss for men than women (since no woman has ever played in the NFL), but this is absolutely not discrimination. I’d give this paper an "overall score" of 8 if they used a more value-neutral term (such as “disparities” or “differences”) instead of “discrimination”. The authors also suggest a method for determining where to expend effort to produce better models, which looks at subgroups where error rates differ by protected characteristics. For example, they find that their model seems to make far fewer errors on Hispanic cancer patients than cancer patients of other races. I’d be interested to see whether this effect was due to differences in base rates within the subgroups. One group might be easier to classify simply because their base rate is closer to zero or one, but this doesn’t necessarily suggest the existence of extra features to reduce noise (and therefore the accuracy gap). Clarity: This paper is well written and the math is clear. Originality: This paper is clearly original, although other papers have hinted at similar high-level results. Significance: This paper contains some valuable insights that could be readily applied to existing machine learning workflows, a rare achievement for FATML papers. However, I suspect the practical impact will be small since many problems have lots of data (so bias and variance are small) and no good way to get additional predictive features (so noise can't be reduced). Response to author rebuttal: I was disappointed to see the authors push back about the use of "discrimination" to describe differences in cost functions. "However, in the NFL example, if differences in error rates were related to race, there might be cause for concern--depending on how the classifier is used. 
Drawing on terminology from prior work, we use “discrimination” as we consider applications in which decisions based on error-biased classifiers may be considered unfair, such as healthcare, employment and criminal justice—similar to how a human making consistent race-dependent errors could also be considered discriminatory." This is all true, differences in error rates *might* be discriminatory, but they also might not be. This is why we need to be extremely precise with our language. Previous papers have been careless with the word "discriminatory", but this does not mean this (otherwise careful and important) paper should follow their example. The title of the paper does not give me confidence that the discussion in the revised paper about when, exactly, differences in cost functions are discriminatory will be sufficiently nuanced.
NIPS
Title Regret Bounds for Information-Directed Reinforcement Learning Abstract Information-directed sampling (IDS) has revealed its potential as a data-efficient algorithm [Lu et al., 2021] for reinforcement learning (RL). However, theoretical understanding of IDS for Markov Decision Processes (MDPs) is still limited. We develop novel information-theoretic tools to bound the information ratio and the cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target so that practitioners can balance computation and regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS, which learns the whole environment, under tabular finite-horizon MDPs. In addition, we propose a computationally-efficient regularized-IDS that maximizes an additive form rather than the ratio form, and show that it enjoys the same regret bound as vanilla-IDS. With the aid of rate-distortion theory, we improve the regret bound by learning a surrogate, less informative environment. Furthermore, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product. 1 Introduction Information-directed sampling (IDS) is a design principle proposed by [Russo and Van Roy, 2014, 2018] that optimizes the trade-off between information and regret. Compared with other design principles such as UCB and Thompson sampling (TS), IDS can automatically adapt to different information-regret structures. As a result, IDS demonstrates impressive empirical performance [Russo and Van Roy, 2018] and outperforms UCB and TS in terms of asymptotic optimality [Kirschner et al., 2021] and minimax optimality in heteroscedastic bandits [Kirschner and Krause, 2018] and sparse linear bandits [Hao et al., 2021]. In the context of full RL, multiple works have examined the empirical performance of IDS [Nikolov et al., 2018, Lu et al., 2021]. However, a formal regret guarantee for IDS is still lacking. IDS minimizes a notion of information ratio, the ratio between the per-episode regret and the information gain about the learning target. While different choices of the learning target can lead to different regret bounds and computational methods, the most natural choice is the whole environment, and we name the corresponding IDS vanilla-IDS. In this work, we prove the first prior-free Õ(√(S³A²H⁴L)) Bayesian regret bound for vanilla-IDS, where S is the size of the state space, A is the size of the action space, H is the length of the episodes and L is the number of episodes. Computationally, vanilla-IDS needs to optimize over the full policy space, which is not efficient in general. To facilitate computation, we consider its regularized form, named regularized-IDS, which can be solved by any dynamic programming solver. By carefully choosing the tunable parameter, we prove that regularized-IDS enjoys the same regret bound as vanilla-IDS. Although learning the whole environment offers certain computational advantages, the agent could take too much information to learn the whole environment exactly. A key observation is that different states may correspond to the same value function, which eventually determines the behavior of the optimal policy. Through rate-distortion theory, we construct a surrogate environment that is less informative to learn but sufficient to identify the optimal policy.
As a result, we propose surrogate-IDS, which takes the surrogate environment as the learning target, and prove a sharper Õ(√(S²A²H⁴L)) bound for tabular MDPs. In the end, we extend our analysis to linear MDPs, where we must learn a surrogate environment due to the potentially infinitely many states, and derive a Õ(dH²√T) Bayesian regret bound that matches the existing minimax lower bound up to a factor of H. As a by-product of our analysis, we also prove prior-free Bayesian regret bounds for TS under tabular and linear MDPs. 2 Related work In general, there are two ways to prove Bayesian regret bounds. The first is to introduce confidence sets such that the Bayesian regret bounds of TS can match the best possible frequentist regret bounds by UCB [Russo and Van Roy, 2014]; this has been extended to RL by Osband et al. [2013], Osband and Van Roy [2014], Osband et al. [2019]. However, when the best possible bound for UCB is sub-optimal (for instance, in sparse linear bandits [Hao et al., 2021]), this technique yields a sub-optimal Bayesian regret bound. In addition, this technique can only be used to analyze TS, not IDS. The second is to decompose the Bayesian regret into an information ratio term and a cumulative information gain term and bound them with tools from information theory [Russo and Van Roy, 2016]. This technique can be used to analyze both TS [Dong and Van Roy, 2018, Bubeck and Sellke, 2020] and IDS in the bandit setting [Russo and Van Roy, 2014, Liu et al., 2018, Kirschner et al., 2020b, Hao et al., 2021, 2022] and in partial monitoring [Lattimore and Szepesvári, 2019, Kirschner et al., 2020a, Lattimore and Gyorgy, 2021], but not in RL as far as we know. One exception is Lu and Van Roy [2019], Lu [2020], who bounded the information ratio for a specific Dirichlet prior under additional assumptions. Frequentist regret bounds in episodic RL have received considerable attention recently. For tabular MDPs, several representative works include UCBVI [Azar et al., 2017], optimistic Q-learning [Jin et al., 2018], RLSVI [Russo, 2019], UCB-Advantage [Zhang et al., 2020], and UCB-MQ [Ménard et al., 2021]. While our regret bounds are not state of the art, the primary goal of this paper is to broaden the set of efficient RL design principles known to satisfy √T regret bounds. For linear or linear mixture MDPs, several representative works include LSVI-UCB [Jin et al., 2020], OPPO [Cai et al., 2020], UCRL-VTR [Ayoub et al., 2020, Zhou et al., 2021], and RLSVI [Zanette et al., 2020]. Notably, Zhang [2021], Dann et al. [2021] derived minimax regret bounds for a variant of TS. Beyond the linear case, several works consider general function approximation based on the Bellman rank [Jiang et al., 2017], the eluder dimension [Wang et al., 2020], the Bellman-eluder dimension [Jin et al., 2021] and the bilinear class [Du et al., 2021]. It is worth mentioning the recent impressive work by Foster et al. [2021], who proposed a general Estimation-to-Decisions (E2D) design principle. Although motivated by a different design principle, E2D shares a similar form with regularized-IDS. On the one hand, Foster et al. [2021] mainly focuses on statistical complexity in a minimax sense, while we offer a specific computationally-efficient algorithm, thanks to the chain rule of mutual information and independent priors, and derive corresponding Bayesian regret bounds. On the other hand, while E2D tends to learn the whole environment, our theory in Section 5 suggests that learning a surrogate environment can yield better regret bounds.
3 Preliminary
Finite-horizon MDPs  The environment is characterized by a finite-horizon time-inhomogeneous MDP, which is a tuple E = (S, A, H, {P_h}_{h=1}^H, {r_h}_{h=1}^H), where S is the countable state space with |S| = S, A is the finite action space with |A| = A, H is the episode length, P_h : S × A → Δ_S is the transition probability kernel and r_h : S × A → [0, 1] is the reward function. For a finite set S, let Δ_S be the set of probability distributions over S. We assume S, A, r_h are known and deterministic, while the transition probability kernel is unknown and random. Throughout the paper, we may write P_h and r_h as explicitly depending on E when necessary. Let Θ_h = [0, 1]^(S×A×S) be the parameter space of P_h and Θ = Θ_1 × · · · × Θ_H be the full parameter space. We assume ρ_h is the prior probability measure for P_h on Θ_h with Borel σ-algebra, and ρ = ρ_1 ⊗ · · · ⊗ ρ_H is the product prior probability measure for the whole environment on Θ with Borel σ-algebra. This ensures the priors over different layers are independent, and the prior is assumed to be known to the learner.
Interaction protocol  An agent interacts with a finite-horizon MDP as follows. The initial state s_1^ℓ is assumed to be fixed over episodes. In each episode ℓ ∈ [L] and each layer h ∈ [H], the agent observes a state s_h^ℓ, takes an action a_h^ℓ, and receives a reward r_h^ℓ. Then, the environment evolves to a random next state s_{h+1}^ℓ according to the distribution P_h(·|s_h^ℓ, a_h^ℓ). The episode terminates when s_{H+1} is reached and is reset to the initial state. Denote H_{ℓ,h} as the history of episode ℓ up to layer h, i.e., H_{ℓ,h} = (s_1^ℓ, a_1^ℓ, r_1^ℓ, . . . , s_h^ℓ, a_h^ℓ, r_h^ℓ), and let the set of such possible histories be Ω_h = Π_{i=1}^h (S × A × [0, 1]). Let D_ℓ = (H_{1,H}, . . . , H_{ℓ−1,H}) be the entire history up to episode ℓ, with D_1 = ∅. A policy π is a collection of (possibly randomised) mappings (π_1, . . . , π_H), where each π_h maps an element of Ω_{h−1} × S to Δ(A), and Π is the whole policy class. A stationary policy chooses actions based only on the current state and current layer. The set of such policies is denoted by Π_S, where we write π_h(a|s) for the probability that the agent chooses action a at state s and layer h.
Value function  For each h ∈ [H] and a policy π, the value function V_{h,π}^E : S → R is defined as the expected value of the cumulative rewards received under policy π when starting from an arbitrary state at the h-th layer; that is,
V_{h,π}^E(s) := E_π^E[ Σ_{h′=h}^H r_{h′}(s_{h′}, a_{h′}) | s_h = s ],
where E_π^E denotes the expectation over the sample path generated under policy π and environment E. We adopt the convention that V_{H+1,π}^E(·) = 0. There always exists an optimal policy π* which gives the optimal value V_{h,π*}^E(s) = max_{π∈Π_S} V_{h,π}^E(s) for all s ∈ S and h ∈ [H]. Note that in the Bayesian setting, π* is a function of E, so it is also a random variable. In addition, we define the action-value function as
Q_{h,π}^E(s, a) := E_π^E[ Σ_{h′=h}^H r_{h′}(s_{h′}, a_{h′}) | s_h = s, a_h = a ],
which satisfies the Bellman equation Q_{h,π}^E(s, a) = r_h(s, a) + E_{s′∼P_h(·|s,a)}[V_{h+1,π}^E(s′)]. Furthermore, we denote the state-action occupancy measure by d_{h,π}^E(s, a) = P_π^E(s_h = s, a_h = a), where P_π^E is the law of the sample path generated under policy π and environment E.
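As a concrete illustration of these definitions (our own sketch, not the paper's code), the optimal value functions and a greedy optimal policy of a known finite-horizon tabular MDP can be computed by backward induction on the Bellman equation:

```python
import numpy as np

def backward_induction(P, r):
    """P: (H, S, A, S) transition tensors; r: (H, S, A) rewards in [0, 1].
    Returns optimal values V*_h (with V_{H+1} = 0) and a greedy optimal policy."""
    H, S, A, _ = P.shape
    V = np.zeros((H + 1, S))              # convention: V_{H+1}(.) = 0
    pi = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        Q = r[h] + P[h] @ V[h + 1]        # Q_h(s,a) = r_h(s,a) + E_{s'~P_h}[V_{h+1}(s')]
        pi[h] = Q.argmax(axis=1)          # greedy action per state
        V[h] = Q.max(axis=1)
    return V, pi
```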
Bayesian regret  The agent interacts with the environment for L episodes, and the total number of steps is T = LH. The expected cumulative regret of an algorithm π = {π^ℓ}_{ℓ=1}^L with respect to an environment E is defined as
R_L(E, π) = E[ Σ_{ℓ=1}^L ( V_{1,π*}^E(s_1^ℓ) − V_{1,π^ℓ}^E(s_1^ℓ) ) ],
where the expectation is taken with respect to the randomness of π^ℓ. The Bayesian regret is then defined as BR_L(π) = E[R_L(E, π)], where the expectation is taken with respect to the prior distribution of E. At each episode, TS finds
π_TS^ℓ = argmax_{π∈Π} V_{1,π}^{E_ℓ}(s_1^ℓ),
where E_ℓ is a sample from the posterior distribution of E, i.e., E_ℓ ∼ P(E ∈ · | D_ℓ).
Notations  Let (Ω, F, P) be a measurable space. A random variable X is a measurable function X : Ω → E from a set of possible outcomes Ω to a measurable space E. Now P(X ∈ ·) is a probability measure that maps from F to [0, 1]. D_ℓ is another random variable from Ω to a measurable space Y. Then P(X ∈ · | D_ℓ) is a probability kernel that maps from Ω × F to [0, 1]. We write P_ℓ(·) = P(· | D_ℓ) and E_ℓ[·] = E[· | D_ℓ], and define the conditional mutual information I_ℓ(X; Y) = D_KL(P((X, Y) ∈ · | D_ℓ) || P(X ∈ · | D_ℓ) ⊗ P(Y ∈ · | D_ℓ)). For a random variable χ we define
I_ℓ^π(χ; H_{ℓ,h}) = D_KL(P_{ℓ,π}((χ, H_{ℓ,h}) ∈ ·) || P_{ℓ,π}(χ ∈ ·) ⊗ P_{ℓ,π}(H_{ℓ,h} ∈ ·)),
where P_{ℓ,π} is the law of χ and the history induced by policy π interacting with a sample from the posterior distribution of E given D_ℓ. We define Ē_ℓ as the mean MDP where, for each state-action pair (s, a), P_h^{Ē_ℓ}(·|s, a) = E_ℓ[P_h^E(·|s, a)] is the mean of the posterior measure.
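A minimal sketch of the posterior-sampling (TS) baseline just defined, under the assumption of independent Dirichlet priors on each P_h(·|s, a), so that the posterior update is conjugate; backward_induction is the planner sketched in Section 3, and true_step is a hypothetical callback to the real environment.

```python
import numpy as np

def sample_env(alpha, rng):
    """Draw P ~ posterior: an independent Dirichlet(alpha[h, s, a]) per (h, s, a)."""
    H, S, A, _ = alpha.shape
    P = np.empty_like(alpha, dtype=float)
    for h in range(H):
        for s in range(S):
            for a in range(A):
                P[h, s, a] = rng.dirichlet(alpha[h, s, a])
    return P

def ts_episode(alpha, r, true_step, s1, rng):
    """One TS episode: plan in a posterior sample, act, update Dirichlet counts."""
    P_sample = sample_env(alpha, rng)         # E_ell ~ P(E in . | D_ell)
    _, pi = backward_induction(P_sample, r)   # pi_TS = argmax_pi V^{E_ell}_{1,pi}(s_1)
    s = s1
    for h in range(alpha.shape[0]):
        a_h = int(pi[h, s])
        s_next = true_step(h, s, a_h)         # transition in the true environment
        alpha[h, s, a_h, s_next] += 1.0       # conjugate posterior update
        s = s_next
```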
4 Learning the whole environment
The core design of IDS for RL relies on a notion of information ratio. The information ratio for a policy π at episode ℓ is defined as
Γ_ℓ(π, χ) := (E_ℓ[V_{1,π*}^E(s_1^ℓ) − V_{1,π}^E(s_1^ℓ)])² / I_ℓ^π(χ; H_{ℓ,H}),   (4.1)
where χ is the learning target that prioritizes the information sought by the agent. The choice of χ plays a crucial role in designing IDS and can lead to different regret bounds and computational methods. We first consider the most natural choice of χ, which is the whole environment E.
4.1 Vanilla IDS
Vanilla-IDS takes the whole environment E as the learning target, and at the beginning of each episode the agent computes a stochastic policy
π_IDS^ℓ = argmin_{π∈Π} [ Γ_ℓ(π) := (E_ℓ[V_{1,π*}^E(s_1^ℓ) − V_{1,π}^E(s_1^ℓ)])² / I_ℓ^π(E; H_{ℓ,H}) ].   (4.2)
Define the worst-case information ratio Γ* such that Γ_ℓ(π_IDS^ℓ) ≤ Γ* for any ℓ ∈ [L] almost surely. The next theorem derives a generic regret bound for vanilla-IDS in terms of Γ* and the mutual information between E and the history.
Theorem 4.1. A generic regret bound for vanilla-IDS is BR_L(π_IDS) ≤ √(E[Γ*] I(E; D_{L+1}) L).
The proof is deferred to Appendix A.1 and follows the standard information-theoretic regret decomposition and the chain rule of mutual information, originally exploited by Russo and Van Roy [2014]. For tabular MDPs, it remains to bound E[Γ*] and I(E; D_{L+1}) separately.
Lemma 4.2. The worst-case information ratio for tabular MDPs is upper bounded by E[Γ*] ≤ 2SAH³.
We sketch the main steps of the proof and defer the full proof to Appendix A.2.
Proof sketch. Since vanilla-IDS minimizes the information ratio over all policies, we can bound the information ratio of vanilla-IDS by the information ratio of TS.
• Step one. Our regret decomposition uses the value function based on Ē_ℓ as a bridge:
E_ℓ[V_{1,π*}^E(s_1^ℓ) − V_{1,π_TS^ℓ}^E(s_1^ℓ)] = E_ℓ[V_{1,π*}^E(s_1^ℓ) − V_{1,π_TS^ℓ}^{Ē_ℓ}(s_1^ℓ)] (=: I_1) + E_ℓ[V_{1,π_TS^ℓ}^{Ē_ℓ}(s_1^ℓ) − V_{1,π_TS^ℓ}^E(s_1^ℓ)] (=: I_2).
Note that conditional on D_ℓ, the law of π_TS^ℓ is the same as the law of π*, and both π* and π_TS^ℓ are independent of Ē_ℓ. This implies E_ℓ[V_{1,π_TS^ℓ}^{Ē_ℓ}(s_1^ℓ)] = E_ℓ[V_{1,π*}^{Ē_ℓ}(s_1^ℓ)].
• Step two. Denote ∆_h^E(s, a) = E_{s′∼P_h^E(·|s,a)}[V_{h+1,π*}^E(s′)] − E_{s′∼P_h^{Ē_ℓ}(·|s,a)}[V_{h+1,π*}^E(s′)] as the value-function difference. Inspired by Foster et al. [2021], with the use of the state-action occupancy measure and Lemma D.3, we can derive
I_1 = Σ_{h=1}^H E_ℓ[ Σ_{(s,a)} ( d_{h,π*}^{Ē_ℓ}(s, a) / (E_ℓ[d_{h,π*}^{Ē_ℓ}(s, a)])^{1/2} ) (E_ℓ[d_{h,π*}^{Ē_ℓ}(s, a)])^{1/2} ∆_h^E(s, a) ].
Applying the Cauchy-Schwarz inequality and Pinsker's inequality (see Eqs. (A.2)-(A.4) in the appendix for details), we obtain
I_1 ≤ √(SAH³) ( Σ_{h=1}^H E_ℓ[ E_{π_TS^ℓ}^{Ē_ℓ}[ (1/2) D_KL( P_h^E(·|s_h^ℓ, a_h^ℓ) || P_h^{Ē_ℓ}(·|s_h^ℓ, a_h^ℓ) ) ] ] )^{1/2},
where we interchange π_TS^ℓ and π* again, E_{π_TS^ℓ}^{Ē_ℓ} is taken with respect to s_h^ℓ, a_h^ℓ, and E_ℓ is taken with respect to π_TS^ℓ and E.
• Step three. It remains to establish the following equivalence between the above KL-divergence and the information gain (Lemma A.1):
Σ_{h=1}^H E_ℓ[ E_{π_TS^ℓ}^{Ē_ℓ}[ D_KL( P_h^E(·|s_h, a_h) || P_h^{Ē_ℓ}(·|s_h, a_h) ) ] ] = I_ℓ^{π_TS^ℓ}(E; H_{ℓ,H}).
A crucial step is to use the linearity of expectation and the independence of the priors over different layers (from the product prior assumed in Section 3) to show
P_{ℓ,π_TS^ℓ}(s_{h−1} = s, a_{h−1} = a) = P_{π_TS^ℓ}^{Ē_ℓ}(s_{h−1} = s, a_{h−1} = a).
Combining Steps 1-3, we reach the conclusion; the bound for I_2 is similar.
The next lemma directly bounds the mutual information for tabular MDPs.
Lemma 4.3. The mutual information can be bounded by I(E; D_{L+1}) ≤ 2S²AH log(SLH).
The proof relies on the construction of a Bayes mixture density and a covering set for the KL-divergence, and is deferred to Appendix A.3. Combining Theorem 4.1, Lemmas 4.2 and 4.3 yields the following:
Theorem 4.4 (Regret bound for tabular MDPs). Suppose π_IDS = {π_IDS^ℓ}_{ℓ=1}^L is the vanilla-IDS policy. The following Bayesian regret bound holds for tabular MDPs: BR_L(π_IDS) ≤ √(8S³A²H⁴L log(SLH)).
Although this regret bound is sub-optimal, it is the first sub-linear prior-free Bayesian regret bound for vanilla-IDS.
Remark 4.5. It is worth mentioning that Lu and Van Roy [2019], Lu [2020] also derived Bayesian regret bounds using information-theoretic tools, but these hold only for a specific Dirichlet prior under additional distribution-specific assumptions. Their proofs heavily exploit properties of the Dirichlet distribution and cannot easily be extended to prior-free regret bounds. In the context of finite-horizon MDPs, Lu et al. [2021] considered a conditional-IDS such that at each time step, conditional on s_h^ℓ, conditional-IDS takes the action according to
π_h(·|s_h^ℓ) = argmin_{ν∈Δ_A} (E_ℓ[V_{h,π*}^E(s_h^ℓ) − Q_{h,π*}^E(s_h^ℓ, A_h)])² / I_ℓ(χ; (A_h, Q_{h,π*}^E(s_h^ℓ, A_h))),
where A_h is sampled from ν. Conditional-IDS defines the information ratio per step rather than per episode, so it only needs to optimize over the action space rather than the policy space. This offers great computational benefits, but there is no regret guarantee for conditional-IDS. Recently, Hao et al. [2022] demonstrated the theoretical limitations of conditional-IDS in contextual bandits.
4.2 Regularized IDS
Computing an IDS policy in practice usually involves two steps: (1) approximating the information ratio; (2) optimizing the information ratio. In bandits, where the optimal policy is only a function of the action space, optimizing Eq. (4.2) is a convex optimization problem and has an optimal solution with at most two non-zero components (Russo and Van Roy [2018, Proposition 6]).
However, in MDPs, where the optimal policy is a mapping from the state space to the action space, vanilla-IDS needs to search for such two non-zero components over the full policy space, which suggests that the computational time might grow exponentially in S and H. To overcome this obstacle, we propose regularized-IDS, which can be efficiently computed by any dynamic programming solver and enjoys the same regret bound as vanilla-IDS. At each episode ℓ, regularized-IDS finds the policy
π_r-IDS^ℓ = argmax_{π∈Π} E_ℓ[V_{1,π}^E(s_1^ℓ)] + λ I_ℓ^π(E; H_{ℓ,H}),   (4.3)
where λ > 0 is a tunable parameter. To approximate the objective function in Eq. (4.3), we assume access to a posterior sampling oracle.
Definition 4.6 (Posterior sampling oracle). Given a prior over E and a history D_ℓ, the posterior sampling oracle, SAMP, is a subroutine which returns a sample from the posterior distribution P_ℓ(E). Multiple calls to the procedure result in independent samples.
Remark 4.7. SAMP can be obtained exactly when a conjugate prior such as the Dirichlet distribution is placed on the transition kernel. When one uses neural nets to estimate the model, SAMP can be approximated by epistemic neural networks [Osband et al., 2021a], a general framework for quantifying uncertainty in neural nets. The effectiveness of different epistemic neural networks, such as deep ensembles, dropout and stochastic gradient MCMC, has been examined empirically by Osband et al. [2021b].
We compute π_r-IDS^ℓ in two steps (see the sketch after this section):
• Firstly, we prove an equivalent form of the objective function in Eq. (4.3) using the chain rule of mutual information. Define r′_h(s, a) as an augmented reward function:
r′_h(s, a) = r_h(s, a) + λ ∫ D_KL( P_h^E(·|s, a) || P_h^{Ē_ℓ}(·|s, a) ) dP_ℓ(E).
Proposition 4.8. The following equivalence holds:
E_ℓ[V_{1,π}^E(s_1^ℓ)] + λ I_ℓ^π(E; H_{ℓ,H}) = E_π^{Ē_ℓ}[ Σ_{h=1}^H r′_h(s_h, a_h) ].
The proof is deferred to Appendix A.4.
• Secondly, given SAMP, the augmented reward r′_h and the MDP Ē_ℓ can be well approximated by Monte Carlo sampling. Therefore, at each episode ℓ, finding π_r-IDS^ℓ is equivalent to finding an optimal policy for a computable, augmented MDP {P_h^{Ē_ℓ}, r′_h}_{h=1}^H. This can be solved efficiently by any dynamic programming solver, such as value iteration or policy iteration.
In the end, we show that π_r-IDS^ℓ enjoys the same regret bound as vanilla-IDS when the tunable parameter is chosen carefully.
Theorem 4.9. By choosing λ = √(L E[Γ*] / I(E; D_{L+1})), we have BR_L(π_r-IDS) ≤ √((3/2) L E[Γ*] I(E; D_{L+1})).
The proof is deferred to Appendix A.5. Let M_1, M_2 be upper bounds on E[Γ*] and I(E; D_{L+1}) respectively. In practice, we can conservatively choose λ = √(L M_1 / M_2) such that BR_L(π_r-IDS) ≤ √((3/2) M_1 M_2 L). From Lemmas 4.2 and 4.3 for tabular MDPs, we can choose M_1 = 2SAH³ and M_2 = 2S²AH log(SLH).
Remark 4.10. Russo and Van Roy [2018, Section 9.3] also considered a tunable version of IDS (for bandits), but took a square form of E_ℓ[V_{1,π}^E(s_1^ℓ)]. While this makes no difference in the bandit setting, it prevents the use of a dynamic programming solver in the RL setting. We are also inspired by Foster et al. [2021, Section 9.3], who studied the relationship between the information ratio and the Decision-Estimation Coefficient.
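A minimal sketch of the two-step computation of regularized-IDS above, assuming a SAMP-style oracle that returns i.i.d. posterior draws of the transition tensor; kl_categorical and the Monte Carlo size n_mc are our own choices, and backward_induction is the planner sketched earlier.

```python
import numpy as np

def kl_categorical(p, q, eps=1e-12):
    """KL(p || q) along the last axis, with clipping for numerical safety."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def regularized_ids_policy(samp, r, lam, n_mc=32):
    """samp() -> one posterior draw of the (H, S, A, S) transition tensor."""
    draws = np.stack([samp() for _ in range(n_mc)])    # posterior samples of E
    P_bar = draws.mean(axis=0)                         # mean MDP Ebar_ell
    bonus = kl_categorical(draws, P_bar).mean(axis=0)  # MC estimate of the KL
                                                       # integral in r'_h(s, a)
    _, pi = backward_induction(P_bar, r + lam * bonus) # plan in the augmented MDP
    return pi
```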
5 Learning a surrogate environment
When the state space is large, the agent may need too much information to learn the whole environment E exactly, which is reflected through I(E; D_{L+1}). A key observation is that different states may correspond to the same value function, which eventually determines the behavior of the optimal policy. Based on the rate-distortion theory developed in Dong and Van Roy [2018], we reduce this redundancy and construct a surrogate environment that requires less information to learn.
5.1 A rate-distortion approach
Rate-distortion theory [Cover and Thomas, 1991] addresses the problem of determining the minimal number of bits per symbol that should be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion. It was recently introduced to the bandits community to develop sharper bounds for linear bandits [Dong and Van Roy, 2018] and time-sensitive bandits [Russo and Van Roy, 2022]. We take a similar approach to construct a surrogate environment.
Surrogate environment  Suppose there exists a partition {Θ_k}_{k=1}^K over Θ such that for any E, E′ ∈ Θ_k and any k ∈ [K], we have
V_{1,π*_E}^E(s_1^ℓ) − V_{1,π*_E}^{E′}(s_1^ℓ) ≤ ε,   (5.1)
where ε > 0 is the distortion tolerance and we write the optimal policy as explicitly depending on the environment. Let ζ be a discrete random variable taking values in {1, . . . , K} that indicates the region in which E lies, i.e., ζ = k if and only if E ∈ Θ_k. Therefore, ζ can be viewed as a statistic of E and is less informative than E if K is small. The next lemma shows the existence of the surrogate environment based on the partition.
Lemma 5.1. For any partition {Θ_k}_{k=1}^K and any ℓ ∈ [L], we can construct a surrogate environment Ẽ*_ℓ ∈ Θ, a random MDP such that the law of Ẽ*_ℓ only depends on ζ, and
E_ℓ[V_{1,π*_E}^E(s_1^ℓ) − V_{1,π_TS^ℓ}^E(s_1^ℓ)] − E_ℓ[V_{1,π*_E}^{Ẽ*_ℓ}(s_1^ℓ) − V_{1,π_TS^ℓ}^{Ẽ*_ℓ}(s_1^ℓ)] ≤ ε.   (5.2)
The concrete form of Ẽ*_ℓ is deferred to Eq. (B.1) in the appendix.
Surrogate IDS  We refer to the IDS based on the surrogate environment Ẽ*_ℓ as surrogate-IDS; it minimizes
π_s-IDS^ℓ = argmin_{π∈Π} (E_ℓ[V_{1,π*}^E(s_1^ℓ) − V_{1,π}^E(s_1^ℓ)] − ε)² / I_ℓ^π(Ẽ*_ℓ; H_{ℓ,H}),   (5.3)
for a parameter ε > 0 that will be chosen later. Denote the surrogate information ratio of TS as
Γ̃ = max_{ℓ∈[L]} (E_ℓ[V_{1,π*}^{Ẽ*_ℓ}(s_1^ℓ) − V_{1,π_TS^ℓ}^{Ẽ*_ℓ}(s_1^ℓ)])² / I_ℓ^{π_TS^ℓ}(Ẽ*_ℓ; H_{ℓ,H}).
We first derive a generic regret bound for surrogate-IDS in the following theorem.
Theorem 5.2. A generic regret bound for surrogate-IDS is BR_L(π_s-IDS) ≤ √(E[Γ̃] I(ζ; D_{L+1}) L) + Lε.
We defer the proof to Appendix B.2. Given ζ, Ẽ*_ℓ and H_{ℓ,H} are independent under the law of P_{ℓ,π_s-IDS^ℓ}. By the data processing inequality, the proof uses the fact that I_ℓ^{π_s-IDS^ℓ}(Ẽ*_ℓ; H_{ℓ,H}) ≤ I_ℓ^{π_s-IDS^ℓ}(ζ; H_{ℓ,H}). Compared with the regret bound of vanilla-IDS in Theorem 4.1, the regret bound of surrogate-IDS depends on the information gain about ζ rather than about the whole environment E. If there exists a partition with a small covering number K, the agent pays less information to learn. The second term, Lε, is the price of the distortion. In the following, we bound E[Γ̃] and I(ζ; D_{L+1}) for tabular and linear MDPs separately.
5.2 Tabular MDPs
We first show the existence of the partition required in Lemma 5.1 for tabular MDPs and an upper bound on the covering number K.
Lemma 5.3. There exists a partition {Θ_k^ε}_{k=1}^K over Θ such that for any k ∈ [K] and E_1, E_2 ∈ Θ_k^ε,
V_{1,π*_{E_1}}^{E_1}(s_1) − V_{1,π*_{E_1}}^{E_2}(s_1) ≤ ε,
and the log covering number satisfies log(K) ≤ SAH log(4H²/ε).
The proof is deferred to Lemma B.3. For tabular MDPs, the mutual information between ζ and the history can be bounded by I(ζ; D_{L+1}) ≤ H(ζ) ≤ log(K) ≤ SAH log(4H²/ε), where H(·) is the Shannon entropy.
Compared with Lemma 4.3, where the whole environment is learned, learning the surrogate environment saves a factor of $S$ in the bound on the mutual information.
Lemma 5.4. The surrogate information ratio for tabular MDPs is upper bounded by $\mathbb{E}[\tilde{\Gamma}] \le 2SAH^3$.
The proof is the same as that of Lemma 4.2 and is thus omitted. Combining Lemmas 5.3 and 5.4 yields an improved bound for tabular MDPs using surrogate-IDS.
Theorem 5.5 (Improved regret bound for tabular MDPs). By choosing $\varepsilon = 1/L$, the regret bound of surrogate-IDS for tabular MDPs satisfies
$$\mathrm{BR}_L(\pi_{\text{s-IDS}}) \le \sqrt{2S^2A^2H^4L\log(4HL)}\,.$$
For tabular MDPs, surrogate-IDS improves the regret bound of vanilla-IDS by a factor of $S$. However, it is still away from the minimax lower bound by a factor of $\sqrt{SAH}$. We conjecture that surrogate-IDS can achieve the optimal bound at the price of a lower-order term, but leave this as future work.
Remark 5.6. Although the existence of $\tilde{\mathcal{E}}^*_{\ell}$ is established by a constructive argument, finding $\tilde{\mathcal{E}}^*_{\ell}$ requires a grid search and is not computationally efficient.
5.3 Linear MDPs
We extend our analysis to linear MDPs, a fundamental model for studying the theoretical properties of linear function approximation in RL. All the proofs are deferred to Appendix B.4-B.5.
Definition 5.7 (Linear MDPs [Yang and Wang, 2019, Jin et al., 2020]). Let $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ be a feature map which assigns to each state-action pair a $d$-dimensional feature vector, and assume $\|\phi(s,a)\|_2 \le 1$. An MDP is called a linear MDP if for any $h \in [H]$, there exist $d$ unknown (signed) measures $\psi^1_h, \dots, \psi^d_h$ over $\mathcal{S}$ such that for any $(s,a) \in \mathcal{S} \times \mathcal{A}$,
$$P_h(\cdot|s,a) = \langle \phi(s,a), \psi_h(\cdot) \rangle\,,$$
where $\psi_h = (\psi^1_h, \dots, \psi^d_h)$. Let $\Theta^{\mathrm{Lin}}$ denote the parameter space of linear MDPs and assume $\big\|\sum_{s'} \psi_h(s')\big\|_2 \le C_{\psi}$.
Note that the degrees of freedom of a linear MDP still depend on $S$, which implies that $I(\mathcal{E}; D_{L+1})$ may still scale with $S$. Therefore, under the current regret decomposition of Theorem 4.4, we must learn a surrogate environment rather than the whole environment for linear MDPs. We first show the existence of a partition over linear MDPs whose log covering number depends only on the feature dimension $d$.
Lemma 5.8. There exists a partition $\{\Theta^{\varepsilon}_k\}_{k=1}^{K}$ over $\Theta^{\mathrm{Lin}}$ such that for any $k \in [K]$ and $\mathcal{E}_1, \mathcal{E}_2 \in \Theta^{\varepsilon}_k$,
$$V^{\mathcal{E}_1}_{1,\pi^*_{\mathcal{E}_1}}(s_1) - V^{\mathcal{E}_2}_{1,\pi^*_{\mathcal{E}_1}}(s_1) \le \varepsilon\,,$$
and the log covering number satisfies $\log(K) \le Hd\log(H^2C_{\psi}/\varepsilon + 1)$.
For linear MDPs, the mutual information can be bounded by
$$I(\zeta; D_{L+1}) \le \mathbb{H}(\zeta) \le \log(K) \le Hd\log(H^2C_{\psi}/\varepsilon + 1)\,.$$
Lemma 5.9. The surrogate information ratio for linear MDPs is upper bounded by $\mathbb{E}[\tilde{\Gamma}] \le 4H^3d$.
Theorem 5.10 (Regret bound for linear MDPs). By choosing $\varepsilon = 1/L$, the regret bound of surrogate-IDS for linear MDPs satisfies
$$\mathrm{BR}_L(\pi_{\text{s-IDS}}) \le \sqrt{4H^4d^2L\log(H^2C_{\psi}L + 1)} + 1\,.$$
This Bayesian bound improves the $O(d^{3/2}H^2\sqrt{L})$ frequentist regret of LSVI-UCB [Jin et al., 2020] by a factor of $\sqrt{d}$ and matches the existing minimax lower bound $\Omega(\sqrt{H^3d^2L})$ [Zhou et al., 2021] up to an $H$ factor. However, we would like to emphasize that this is not an apples-to-apples comparison, since in general a frequentist regret bound is stronger than a Bayesian regret bound.
5.4 Regret bounds for TS
As a direct application of our rate-distortion analysis, we provide Bayesian regret bounds for Thompson sampling.
Theorem 5.11. A generic regret bound for TS is
$$\mathrm{BR}_L(\pi_{\mathrm{TS}}) \le \sqrt{\mathbb{E}[\tilde{\Gamma}]\, I(\zeta; D_{L+1})\, L} + L\varepsilon\,.$$
This implies that for tabular and linear MDPs, TS has the same regret bound as surrogate-IDS. Note that the computation of TS does not need to involve the surrogate environment $\tilde{\mathcal{E}}^*_{\ell}$: once the posterior sampling oracle is available, computing the policy is efficient, as the sketch below illustrates.
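Here is a minimal sketch of that computation in the tabular, Dirichlet-conjugate setting of Remark 4.7; the function names and the conjugate update shown are illustrative, not from the paper. Each episode uses a single SAMP call followed by backward induction on the sampled MDP, which realizes $\pi^{\ell}_{\mathrm{TS}} = \operatorname{argmax}_{\pi} V^{\mathcal{E}_{\ell}}_{1,\pi}(s^{\ell}_1)$.

```python
import numpy as np

def ts_episode_policy(alpha, r, rng=None):
    """Thompson sampling for one episode: one SAMP call plus dynamic programming.

    alpha : Dirichlet posterior parameters over transitions, shape (H, S, A, S).
    r     : known reward function, shape (H, S, A).
    """
    rng = np.random.default_rng(rng)
    H, S, A, _ = alpha.shape
    g = rng.gamma(alpha)                    # one posterior sample E_l of the
    P = g / g.sum(axis=-1, keepdims=True)   # transition kernels (SAMP)
    policy = np.zeros((H, S), dtype=int)
    V = np.zeros(S)
    for h in reversed(range(H)):            # backward induction: optimal for E_l
        Q = r[h] + P[h] @ V
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def update_posterior(alpha, transitions):
    """Conjugate Dirichlet update: add one count per observed (h, s, a, s')."""
    for h, s, a, s_next in transitions:
        alpha[h, s, a, s_next] += 1.0
    return alpha
```

Surrogate-IDS, by contrast, would additionally require constructing $\tilde{\mathcal{E}}^*_{\ell}$ (Remark 5.6), which is where the computational gap arises.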
However, when the worst-case information ratio cannot be optimally bounded by the information ratio of TS, IDS achieves better regret bounds than TS, for example in bandits with graph feedback [Hao et al., 2022] and sparse linear bandits [Hao et al., 2021].
6 Conclusion
In this paper, we derive the first prior-free Bayesian regret bounds for information-directed RL under tabular and linear MDPs. Theoretically, it will be of great interest to see whether any version of IDS can achieve the $\Omega(\sqrt{SAH^3L})$ minimax lower bound for tabular MDPs.
Acknowledgements
We would like to thank Johannes Kirschner for helpful initial discussions.
1. What is the focus and contribution of the paper regarding learning in episodic MDPs?
2. What are the strengths of the proposed IDS algorithms, particularly in their ability to adapt to different settings?
3. Do you have any concerns or questions regarding the definitions and notation used in the paper?
4. How does the paper's analysis relate to other works in the field, such as Foster et al. 2021?
5. What are the limitations of the proposed algorithms, and how do they compare to other approaches in terms of computational efficiency?
Summary Of The Paper
This paper provides the first information-directed sampling (IDS) algorithm with theoretical guarantees for learning in episodic MDPs in the prior-free setting. The assumptions are the following: the reward function is deterministic and known to the learner, and the transition probabilities of the MDP are unknown and sampled from a known prior distribution before the first episode begins. The authors consider two types of setups: all presented algorithms work in the tabular setting, and some results are also extended to linear MDPs. The performance of the learner is measured by the Bayesian regret.
Strengths And Weaknesses
This work adapts the idea of IDS to the setting of learning in MDPs, and the authors present three algorithms to tackle this problem. For the first algorithm proposed, vanilla-IDS, the idea is to introduce a notion of the "environment", which hides all the randomness of the unknown parameters of the MDP's transitions, and to define the information ratio for a policy \pi as the square of the expected difference between the value function of the optimal policy and the value function of policy \pi, divided by the information gain between the "environment" variable and the history of episode ℓ up to layer h produced by policy \pi, all conditioned on the past history. To find a \pi that achieves the minimum information ratio, the learner has to optimize over the full policy space, which is computationally costly. The analysis is simple and borrows tricks from the literature, such as the decomposition of regret based on the marginal posterior distribution of the "environment" (line 145) and a trick with the ratio of occupancy measures (Lemma D.3), but all together it gives the first regret bound of this kind.
Next, the authors propose the regularized-IDS algorithm, where instead of optimizing the ratio, they optimize a weighted sum of the two terms of the vanilla-IDS objective. The result of this section is that regularized-IDS can be efficiently computed using samples from the posterior, which give the augmented MDP, and has the same regret bound as vanilla-IDS.
Finally, the authors improve the regret bound of regularized-IDS and vanilla-IDS, which they show is achievable by the surrogate-IDS algorithm. The idea of this algorithm is to construct a surrogate environment that is an \epsilon-approximation of the true "environment" variable and then compute the information ratio over this approximated environment. This algorithm is not computationally efficient, but it improves the dependence of the regret bound on S. Also, the discretization approach allows extending the results obtained for episodic MDPs to linear MDPs, as the number of sets in the partition of the environment space grows as the covering number of a bounded set in R^d.
I find it especially interesting how similar techniques work in the analysis of this paper and [Foster et al., 2021], since it gives further evidence that the Decision-Estimation Coefficient is related to the information ratio.
Questions
In some places the definitions of variables are omitted; please check:
- The definition of \bar{\Epsilon}_l is recursive.
- zeta in the proof of B.1 is undefined.
- The way of defining \pi_{TS}^l is confusing, as this policy is only used in the proof and no presented algorithm uses it to compute the actual policy.
Minor remarks
- For clarity, it should be mentioned in the preliminaries that the prior is assumed to be known to the learner.
- The partition should depend on \epsilon.
- In equation (3.3), \pi is missing from I_{\ell}.
Limitations
The main limitation of the proposed algorithms is that they are not computationally efficient.
NIPS
1. What is the focus of the paper regarding Markov decision processes?
2. What are the strengths of the proposed approach, particularly in terms of regret bounds?
3. What are the weaknesses of the paper, especially regarding comparisons with other works and practical applications?
4. Do you have any questions or concerns about the assumptions made in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies information-directed sampling (IDS) for Markov decision processes. In particular, the authors prove Bayesian regret bounds of IDS for finite-horizon tabular MDPs and linear MDPs.
Strengths And Weaknesses
The writing of the paper is clear and the proofs seem to be sound. While I appreciate the comparison of the regret bound with other methods in the literature, I do not think it is proper to compare the Bayesian regret bound derived in this paper with the frequentist regret bounds in other papers. Careful remarks should accompany any such comparison.
Questions
Since S, A and r_h are assumed to be known and deterministic, is the expectation over the environment E just the expectation over the prior of the transition probabilities?
Limitations
- Line 51: it would be more convincing to include some cases where the best UCB-type algorithms are sub-optimal.
- One potential drawback of the paper is its lack of a specific example for calculating the information ratio and the sample complexity of estimating it, which makes it hard to understand the advantage of using an IDS policy in practice.
- Line 177: conditionar => conditional
NIPS
Title Regret Bounds for Information-Directed Reinforcement Learning Abstract Information-directed sampling (IDS) has revealed its potential as a data-efficient algorithm [Lu et al., 2021] for reinforcement learning (RL). However, theoretical understanding of IDS for Markov Decision Processes (MDPs) is still limited. We develop novel information-theoretic tools to bound the information ratio and cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target such that the practitioners can balance the computation and regret bounds. As a consequence, we derive priorfree Bayesian regret bounds for vanilla-IDS which learns the whole environment under tabular finite-horizon MDPs. In addition, we propose a computationallyefficient regularized-IDS that maximizes an additive form rather than the ratio form and show that it enjoys the same regret bound as vanilla-IDS. With the aid of rate-distortion theory, we improve the regret bound by learning a surrogate, less informative environment. Furthermore, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product. 1 Introduction Information-directed sampling (IDS) is a design principle proposed by [Russo and Van Roy, 2014, 2018] that optimizes the trade-off between information and regret. Comparing with other design principles such as UCB and Thompson sampling (TS), IDS can automatically adapt to different information-regret structures. As a result, IDS demonstrates impressive empirical performance [Russo and Van Roy, 2018] and outperforms UCB and TS in terms of asymptotic optimality [Kirschner et al., 2021] and minimax optimality in heteroscedastic bandits [Kirschner and Krause, 2018] and sparse linear bandits [Hao et al., 2021]. In the context of full RL, mutiple works have examined the empirical performance of IDS [Nikolov et al., 2018, Lu et al., 2021]. However, formal regret guarantee for IDS is still lacking. IDS minimizes a notion of information ratio that is the ratio of per-episode regret and information gain about the learning target. While different choices of the learning target could lead to different regret bounds and computational methods, the most natural choice is the whole environment and we name the corresponding IDS as vanilla-IDS. In this work, we prove the first prior-free Õ( √ S3A2H4L) Bayesian regret bound for vanilla-IDS, where S is the size of state space, A is the size of action space, H is the length of episodes and L is the number of episodes. Computationally, vanilla-IDS needs to optimize over the full policy space, which is not efficient in general. To facilitate the computation, we consider its regularized form, named regularized-IDS, that can be solved by any dynamic programming solver. By carefully 36th Conference on Neural Information Processing Systems (NeurIPS 2022). choosing the tunable parameter, we prove that regularized-IDS enjoys the same regret bound as vanilla-IDS. Although learning the whole environment offers certain computational advantages, the agent could take too much information to learn the whole environment exactly. A key observation is that different states may correspond to the same value function which eventually determines the behavior of the optimal policy. Through the rate-distortion theory, we construct a surrogate environment that is less informative to learn but enough to identify the optimal policy. 
As a result, we propose surrogate-IDS that takes the surrogate environment as the learning target and prove a sharper Õ( √ S2A2H4L) bound for tabular MDPs. In the end, we extend our analysis to linear MDPs where we must learn a surrogate environment due to potentially infinitely many states and derive a Õ(dH2 √ T ) Bayesian regret bound that matches the existing minimax lower bound up to a factor of H . As a by-product of our analysis, we also prove prior-free Bayesian regret bounds for TS under tabular and linear MDPs. 2 Related work In general, there are two ways to prove Bayesian regret bounds. The first is to introduce confidence sets such that the Bayesian regret bounds of TS can match the best possible frequentist regret bounds by UCB [Russo and Van Roy, 2014] and has been extended to RL by Osband et al. [2013], Osband and Van Roy [2014], Osband et al. [2019]. However, when the best possible bound for UCB is sub-optimal (for instance, sparse linear bandits [Hao et al., 2021]), this technique yields a sub-optimal Bayesian regret bound. In addition, this technique can only be used to analyze TS but not IDS. The second is to decompose the Bayesian regret into an information ratio term and a cumulative information gain term and bound them by tools from information theory [Russo and Van Roy, 2016]. This technique can be used to analyze both TS [Dong and Van Roy, 2018, Bubeck and Sellke, 2020] and IDS in bandits setting [Russo and Van Roy, 2014, Liu et al., 2018, Kirschner et al., 2020b, Hao et al., 2021, 2022], partial monitoring [Lattimore and Szepesvári, 2019, Kirschner et al., 2020a, Lattimore and Gyorgy, 2021] but not in RL as far as we know. One exception is Lu and Van Roy [2019], Lu [2020] who bounded the information ratio for a specific Dirichlet prior with additional assumptions. Frequentist regret bounds in episodic RL have received considerable attention recently. For tabular MDPs, several representative works include UCBVI [Azar et al., 2017], optimistic Q-learning [Jin et al., 2018], RLSVI [Russo, 2019], UCB-Advantage [Zhang et al., 2020], UCB-MQ [Ménard et al., 2021]. While our regret bounds are not state of the art, the primary goal of this paper is to broaden the set of efficient RL design principles known to satisfy √ T regret bounds. For linear or linear mixture MDPs, several representative works include LSVI-UCB [Jin et al., 2020], OPPO [Cai et al., 2020], UCRL-VTR [Ayoub et al., 2020, Zhou et al., 2021], RLSVI [Zanette et al., 2020]. Notably, Zhang [2021], Dann et al. [2021] derived minimax regret bounds for a variant of TS. Beyond linear cases, several works consider general function approximation based on Bellman rank [Jiang et al., 2017], eluder dimension [Wang et al., 2020], Bellman-eluder dimension [Jin et al., 2021] and bilinear class [Du et al., 2021]. It is worth mentioning the recent impressive work by Foster et al. [2021] who proposed a general Estimation-to-Decisions (E2D) design principle. Although motivated by different design principles, E2D shares the similar form as regularized-IDS. On one hand, Foster et al. [2021] mainly focuses on statistical complexity in a minimax sense, while we offer a specific computationallyefficient algorithm thanks to the chain rule of mutual information and independent priors and derive corresponding Bayesian regret bounds. On the other hand, while E2D tends to learn the whole environment, our theory in Section 5 suggests learning a surrogate environment could yield better regret bounds. 
3 Preliminary Finite-horizon MDPs The environment is characterized by a finite-horizon time-inhomogeneous MDP, which is a tuple E = (S,A, H, {Ph}Hh=1, {rh}Hh=1), where S is the countable state space with |S| = S, A is the finite action space with |A| = A, H is the episode length, Ph : S × A → ∆S is the transition probability kernel and rh : S ×A → [0, 1] is the reward function. For a finite set S , let ∆S be the set of probability distributions over S. We assume S, A, rh are known and deterministic while the transition probability kernel is unknown and random. Throughout the paper, we may write Ph and rh explicitly depend on E when necessary. Let Θh = [0, 1]S×A×S be the parameter space of Ph and Θ = Θ1 × · · · ×ΘH be the full parameter space. We assume ρh is the prior probability measure for Ph on Θh with Borel σ-algebra and ρ = ρ1 ⊗ · · · ⊗ ρH as the product prior probability measure for the whole environment on Θ with Borel σ-algebra. This ensures the priors over different layers are independent and the prior is assumed to be known to the learner. Interaction protocol An agent interacts with a finite-horizon MDP as follows. The initial state s`1 is assumed to be fixed over episodes. In each episode ` ∈ [L] and each layer h ∈ [H], the agent observes a state s`h, takes an action a ` h, and receives a reward r ` h. Then, the environment evolves to a random next state s`h+1 according to distribution Ph(·|s`h, a`h). The episode terminates when sH+1 is reached and is reset to the initial state. Denote H`,h as the history of episode ` up to layer h, e.g., H`,h = (s`1, a`1, r`1, . . . , s`h, a`h, r`h) and the set of such possible history is Ωh = ∏h i=1(S × A× [0, 1]) . Let D` = (H1,H , . . . ,H`−1,H) as the entire history up to episode ` with D1 = ∅. A policy π is a collection of (possibly randomised) mappings (π1, . . . , πH) where each πh maps an element from Ωh−1×S to ∆(A) and Π is the whole policy class. A stationary policy chooses actions based on only the current state and current layer. The set of such policies is denoted by ΠS where we denote πh(a|s) as the probability that the agent chooses action a at state s and layer h. Value function For each h ∈ [H] and a policy π, the value function V Eh,π : S → R is defined as the expected value of cumulative rewards received under policy π when starting from an arbitrary state at hth layer; that is, V Eh,π(s) := EEπ [ H∑ h′=h rh′(sh′ , ah′) ∣∣∣∣sh = s ] , where EEπ denotes the expectation over the sample path generated under policy π and environment E . We adapt the convention that V EH+1,π(·) = 0. There always exists an optimal policy π∗ which gives the optimal value V Eh,π∗(s) = maxπ∈ΠS V E h,π(s) for all s ∈ S and h ∈ [H]. Note that in the Bayesian setting, π∗ is a function of E so it is also a random variable. In addition, we define the action-value function as follows: QEh,π(s, a) := EEπ [ H∑ h′=h rh′(sh′ , ah′) ∣∣∣∣sh = s, ah = a ] , which satisfies the Bellman equation: QEh,π(s, a) = rh(s, a) + Es′∼Ph(·|s,a)[V Eh+1,π(s′)]. Furthermore, we denote the state-action occupancy measure as dEh,π(s, a) = PEπ(sh = s, ah = a) , where we denote PEπ as the law of the sample path generated under policy π and environment E . Bayesian regret The agent interacts with the environment for L episodes and the total number of steps is T = LH . 
The expected cumulative regret of an algorithm π = {π`}L`=1 with respect to an environment E is defined as RL(E , π) = E [ L∑ `=1 ( V E1,π∗(s ` 1)− V E1,π`(s ` 1) )] , where the expectation is taken with respect to the randomness of π`. The Bayesian regret then is defined as BRL(π) = E[RL(E , π)] , where the expectation is taken with respect to the prior distribution of E . At each episode, TS finds π`TS = argmax π∈Π V E`1,π(s ` 1) , where E` is a sample from the posterior distribution of E , e.g., E` ∼ P(E ∈ ·|D`) . Notations Let (Ω,F ,P) as a measurable space. A random variable X is a measureable function X : Ω → E from a set of possible outcomes Ω to a measurable space E. Now P(X ∈ ·) is a probability measure that maps from F to [0, 1]. D` is another random variable from Ω to a measurable space Y . Then P(X ∈ ·|D`) is a probability kernel that maps from Ω×F → [0, 1]. We write P`(·) = P(·|D`), E`[·] = E[·|D`] and also define the conditional mutual information I`(X;Y ) = DKL(P((X,Y ) ∈ ·|D`)||P(X ∈ ·|D`) ⊗ P(Y ∈ ·|D`)). For a random variable χ we define: Iπ` (χ;H`,h) = DKL(P`,π((χ,H`,h) ∈ ·)||P`,π(χ ∈ ·)⊗ P`,π(H`,h ∈ ·)) , where P`,π is the law of χ and the history induced by policy π interacting with a sample from the posterior distribution of E given D`. We define Ē` as the mean MDP where for each state-action pair (s, a), P Ē`h (·|s, a) = E`[P Eh (·|s, a)] is the mean of posterior measure. 4 Learning the whole environment The core design of IDS for RL relies on a notion of information ratio. The information ratio for a policy π at episode ` is defined as Γ`(π, χ) := (E`[V E1,π∗(s`1)− V E1,π(s`1)])2 Iπ` (χ;H`,H) , (4.1) where χ is the learning target to prioritize information sought by the agent. The choice of χ plays a crucial role in designing the IDS and could lead to different regret bounds and computational methods. We first consider the most natural choice of χ which is the whole environment E . 4.1 Vanilla IDS Vanilla-IDS takes the whole environment E as the learning target and at the beginning of each episode, the agent computes a stochastic policy: π`IDS = argmin π∈Π [ Γ`(π) := (E`[V E1,π∗(s`1)− V E1,π(s`1)])2 Iπ` (E ;H`,H) ] . (4.2) Define the worst-case information ratio Γ∗ such that Γ`(π`IDS) ≤ Γ∗ for any ` ∈ [L] almost surely. The next theorem derives a generic regret bound for vanilla-IDS in terms of Γ∗ and the mutual information between E and the history. Theorem 4.1. A generic regret bound for vanilla-IDS is BRL(πIDS) ≤ √ E[Γ∗]I (E ;DL+1)L . The proof is deferred to Appendix A.1 and follows standard information-theoretical regret decomposition and the chain rule of mutual information that originally was exploited by Russo and Van Roy [2014]. For tabular MDPs, it remains to bound the E[Γ∗] and I (E ;DL+1) separately. Lemma 4.2. The worst-case information ratio for tabular MDPs is upper bounded by E[Γ∗] ≤ 2SAH3 . We sketch the main steps of the proof and defer the full proof to Appendix A.2. Proof sketch. Since vanilla-IDS minimizes the information ratio over all the policies, we can bound the information ratio of vanilla-IDS by the information ratio of TS. • Step one. Our regret decomposition uses the value function based on Ē` as a bridge: E` [ V E1,π∗(s ` 1)− V E1,π`TS(s ` 1) ] = E` [ V E1,π∗(s ` 1)− V Ē` 1,π`TS (s`1) ] ︸ ︷︷ ︸ I1 +E` [ V Ē` 1,π`TS (s`1)− V E1,π`TS(s ` 1) ] ︸ ︷︷ ︸ I2 . Note that conditional on D`, the law of π`TS is the same as the law of π∗ and both π∗ and π`TS are independent of Ē`. This implies E`[V Ē` 1,π`TS (s`1)] = E`[V Ē` 1,π∗(s ` 1)]. • Step two. 
Denote ∆Eh(s, a) = Es′∼PEh (·|s,a)[V E h+1,π∗(s ′)] − Es′∼P Ēh (·|s,a)[V E h+1,π∗(s ′)] as the value function difference. Inspired by Foster et al. [2021], with the use of state-action occupancy measure and Lemma D.3, we can derive I1 = H∑ h=1 E` ∑ (s,a) dĒ`h,π∗(s, a) (E`[dĒ`h,π∗(s, a)])1/2 (E`[dĒ`h,π∗(s, a)]) 1/2∆Eh(s, a) . Applying the Cauchy–Schwarz inequality and Pinsker’s inequality (see Eqs. (A.2)-(A.4) in the appendix for details), we can obtain I1 ≤ √ SAH3 ( H∑ h=1 E` [ EĒ` π`TS [ 1 2 DKL ( P Eh (·|s`h, a`h)||P Ē` h (·|s ` h, a ` h) )]])1/2 , where we interchange π`TS and π ∗ again and EĒ` π`TS is taken with respect to s`h, a ` h and E` is taken with respect to π`TS and E . • Step three. It remains to establish the following equivalence of above KL-divergence and the information gain (Lemma A.1): H∑ h=1 E` [ EĒ` π`TS [ DKL ( P Eh (·|sh, ah)||P Ē` h (·|sh, ah) )]] = Iπ ` TS ` (E ;H`,H) . A crucial step is to use the linearity of the expectation and the independence of priors over different layers (from the product prior as we assumed in Section 3) to show P`,π`TS(sh−1 = s, ah−1 = a) = P Ē` π`TS (sh−1 = s, ah−1 = a) . Combining Steps 1-3, we can reach the conclusion and the bound for I2 is similar. The next lemma directly bounds the mutual information for tabular MDPs. Lemma 4.3. The mutual information can be bounded by I(E ;DL+1) ≤ 2S2AH log (SLH) . The proof relies on the construction of Bayes mixture density and a covering set for KL-divergence and is deferred to Appendix A.3. Combining Theorem 4.1, Lemmas 4.2 and 4.3 yields the following: Theorem 4.4 (Regret bound for tabular MDPs). Suppose πIDS = {π`IDS}L`=1 is the vanilla IDS policy. The following Bayesian regret bound holds for tabular MDPs BRL(πIDS) ≤ √ 8S3A2H4L log(SLH) . Although this regret bound is sub-optimal, this is the first sub-linear prior-free Bayesian regret bound for vanilla-IDS. Remark 4.5. It is worth mentioning that Lu and Van Roy [2019], Lu [2020] also derived Bayesian regret bound using information-theoretical tools but only hold for a specific Dirichlet prior as well other distribution-specific assumptions. Their proof heavily exploits the property of Dirichlet distribution and can not easily be extended to prior-free regret bounds. In the context of finite-horizon MDPs, Lu et al. [2021] considered a conditional-IDS such that at each time step, conditional on s`h, conditional-IDS takes the action according to πh(·|s`h) = argmin ν∈∆A ( E` [ V Eh,π∗(s ` h)−QEh,π∗(s`h, Ah) ])2 I` ( χ; (Ah, QEh,π∗(s ` h, Ah)) ) , where Ah is sampled from ν. Conditional-IDS defined the information ratio per-step rather than per-episode such that it only needs to optimize over action space rather than the policy space. This offers great computational benefits but there is no regret guarantee for conditional-IDS. Recently, Hao et al. [2022] has demonstrated the theoretical limitation of conditional-IDS in contextual bandits. 4.2 Regularized IDS Computing an IDS policy practically usually involves two steps: 1. approximating the information ratio; 2. optimizing the information ratio. In bandits where the optimal policy is only a function of action space, optimizing Eq. (4.2) is a convex optimization problem and has an optimal solution with at most two non-zero components (Russo and Van Roy [2018, Proposition 6]). 
However in MDPs where the optimal policy is a mapping from the state space to the action space, vanilla-IDS needs to traverse two non-zero components over the full policy space which suggests the computational time might grow exponentially in S and H . To overcome this obstacle, we propose regularized-IDS that can be efficiently computed by any dynamic programming solver and enjoy the same regret bound as vanilla-IDS. At each episode `, regularized-IDS finds the policy: π`r-IDS = argmax π∈Π E`[V E1,π(s`1)] + λI` ( E ;Hπ`,H ) , (4.3) where λ > 0 is a tunable parameter. To approximate the objective function in Eq. (4.3), we assume the access to a posterior sampling oracle. Definition 4.6 (Posterior sampling oracle). Given a prior over E and history D`, the posterior sampling oracle, SAMP, is a subroutine which returns a sample from the posterior distribution P`(E). Multiple calls to the procedure result in independent samples. Remark 4.7. SAMP can be exactly obtained when the conjugate prior such as Dirichlet distribution is put on the transition kernel. When one uses neural nets to estimate the model, SAMP can be approximated by epistemic neural networks [Osband et al., 2021a], a general framework to quantify uncertainty for neural nets. The effectiveness of different epistemic neural networks such as deep ensemble, dropout and stochastic gradient MCMC has been examined empirically by Osband et al. [2021b]. We compute π`r-IDS in two steps: • Firstly, we prove an equivalent form of the objective function in Eq. (4.3) using the chain rule of mutual information. Define r′h(s, a) as an augmented reward function: r′h(s, a) = rh(s, a) + λ ∫ DKL ( P Eh (·|s, a)||P Ē` h (·|s, a) ) dP`(E) . Proposition 4.8. The following equivalence holds E`[V E1,π(s`1)] + λIπ` (E ;H`,H) = EĒ`π [ H∑ h=1 r′h(sh, ah) ] . The proof is deferred to Appendix A.4. • Given SAMP, the augmented reward r′h and the MDP Ē` can be well approximated by Monte Carlo sampling. Therefore, at each episode `, finding π`r-IDS is equivalent to find an optimal policy based on a computable and augmented MDP {P Ē`h , r′h}Hh=1. This can be solved efficiently by any dynamic programming solver such as value iteration or policy iteration. In the end, we show that π`r-IDS enjoys the same regret bound as vanilla-IDS when the tunable parameter is carefully chosen. Theorem 4.9. By choosing λ = √ LE[Γ∗]/I(E ;DL+1), we have BRL(π r-IDS) ≤ √ 3 2 LE[Γ∗]I(E ;DL+1) . The proof is deferred to Appendix A.5. Let M1,M2 be upper bounds of E[Γ∗] and I(E ;DL+1) respectively. In practice, we could conservatively choose λ = √ LM1/M2 such that BRL(πr-IDS) ≤√ 3/2M1M2L. From Lemmas 4.2 and 4.3 for tabular MDPs, we could choose M1 = 2SAH3 and M2 = 2S 2AH log(SLH). Remark 4.10. Russo and Van Roy [2018, Section 9.3] also considered a tunable version of IDS (for bandits) but took a square form of E`[V E1,π(s`1)]. While this makes no difference in bandits setting, this prevented us to use dynamic programming solver in RL setting. We are also inspired by Foster et al. [2021, Section 9.3] who studied the relationship between information ratio and Decision-Estimation Coefficient. 5 Learning a surrogate environment When the state space is large, the agent could take too much information to learn exactly the whole environment E which is reflected through I(E ;DL+1). A key observation is that different states may correspond to the same value function who eventually determines the behavior of the optimal policy. 
5 Learning a surrogate environment

When the state space is large, the agent may need too much information to learn the whole environment $\mathcal E$ exactly, which is reflected through $\mathbb{I}(\mathcal E;\mathcal D_{L+1})$. A key observation is that different states may correspond to the same value function, which ultimately determines the behavior of the optimal policy. Based on the rate-distortion theory developed in Dong and Van Roy [2018], we reduce this redundancy and construct a surrogate environment that needs less information to learn.

5.1 A rate-distortion approach

Rate-distortion theory [Cover and Thomas, 1991] addresses the problem of determining the minimal number of bits per symbol that should be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion. It was recently introduced to the bandit community to develop sharper bounds for linear bandits [Dong and Van Roy, 2018] and time-sensitive bandits [Russo and Van Roy, 2022]. We take a similar approach to construct a surrogate environment.

Surrogate environment. Suppose there exists a partition $\{\Theta_k\}_{k=1}^K$ over $\Theta$ such that for any $\mathcal E, \mathcal E'\in\Theta_k$ and any $k\in[K]$, we have

$$V^{\mathcal E}_{1,\pi^*_{\mathcal E}}(s^\ell_1) - V^{\mathcal E'}_{1,\pi^*_{\mathcal E}}(s^\ell_1) \le \varepsilon, \qquad (5.1)$$

where $\varepsilon>0$ is the distortion tolerance and we write the optimal policy explicitly depending on the environment. Let $\zeta$ be a discrete random variable taking values in $\{1,\dots,K\}$ that indicates the region in which $\mathcal E$ lies, i.e., $\zeta = k$ if and only if $\mathcal E\in\Theta_k$. Therefore, $\zeta$ can be viewed as a statistic of $\mathcal E$ and is less informative than $\mathcal E$ when $K$ is small. The next lemma shows the existence of a surrogate environment based on the partition.

Lemma 5.1. For any partition $\{\Theta_k\}_{k=1}^K$ and any $\ell\in[L]$, we can construct a surrogate environment $\tilde{\mathcal E}^*_\ell\in\Theta$, a random MDP such that the law of $\tilde{\mathcal E}^*_\ell$ depends only on $\zeta$, and

$$\mathbb{E}_\ell\big[V^{\mathcal E}_{1,\pi^*_{\mathcal E}}(s^\ell_1) - V^{\mathcal E}_{1,\pi^\ell_{TS}}(s^\ell_1)\big] - \mathbb{E}_\ell\big[V^{\tilde{\mathcal E}^*_\ell}_{1,\pi^*_{\mathcal E}}(s^\ell_1) - V^{\tilde{\mathcal E}^*_\ell}_{1,\pi^\ell_{TS}}(s^\ell_1)\big] \le \varepsilon. \qquad (5.2)$$

The concrete form of $\tilde{\mathcal E}^*_\ell$ is deferred to Eq. (B.1) in the appendix.

Surrogate IDS. We refer to the IDS based on the surrogate environment $\tilde{\mathcal E}^*_\ell$ as surrogate-IDS; it minimizes

$$\pi^\ell_{s\text{-}IDS} = \operatorname*{argmin}_{\pi\in\Pi} \frac{\big(\mathbb{E}_\ell[V^{\mathcal E}_{1,\pi^*}(s^\ell_1) - V^{\mathcal E}_{1,\pi}(s^\ell_1)] - \varepsilon\big)^2}{\mathbb{I}^\pi_\ell(\tilde{\mathcal E}^*_\ell;\mathcal H_{\ell,H})}, \qquad (5.3)$$

for a parameter $\varepsilon>0$ that will be chosen later. Denote the surrogate information ratio of TS as

$$\tilde\Gamma = \max_{\ell\in[L]} \frac{\big(\mathbb{E}_\ell\big[V^{\tilde{\mathcal E}^*_\ell}_{1,\pi^*}(s^\ell_1) - V^{\tilde{\mathcal E}^*_\ell}_{1,\pi^\ell_{TS}}(s^\ell_1)\big]\big)^2}{\mathbb{I}^{\pi^\ell_{TS}}_\ell(\tilde{\mathcal E}^*_\ell;\mathcal H_{\ell,H})}.$$

We first derive a generic regret bound for surrogate-IDS in the following theorem.

Theorem 5.2. A generic regret bound for surrogate-IDS is

$$\mathrm{BR}_L(\pi_{s\text{-}IDS}) \le \sqrt{\mathbb{E}[\tilde\Gamma]\,\mathbb{I}(\zeta;\mathcal D_{L+1})\,L} + L\varepsilon.$$

We defer the proof to Appendix B.2. Given $\zeta$, the surrogate $\tilde{\mathcal E}^*_\ell$ and $\mathcal H_{\ell,H}$ are independent under the law of $\mathbb{P}_{\ell,\pi^\ell_{s\text{-}IDS}}$; by the data processing inequality, the proof uses the fact that

$$\mathbb{I}^{\pi^\ell_{s\text{-}IDS}}_\ell(\tilde{\mathcal E}^*_\ell;\mathcal H_{\ell,H}) \le \mathbb{I}^{\pi^\ell_{s\text{-}IDS}}_\ell(\zeta;\mathcal H_{\ell,H}).$$

Compared with the regret bound of vanilla-IDS in Theorem 4.1, the regret bound of surrogate-IDS depends on the information gain about $\zeta$ rather than about the whole environment $\mathcal E$. If there exists a partition with a small covering number $K$, the agent pays less information to learn. The second term $L\varepsilon$ is the price of distortion.
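For intuition, the argument behind Theorem 5.2 can be summarized as follows; this is our informal sketch of the proof in Appendix B.2. By Lemma 5.1 and the minimizing property of $\pi^\ell_{s\text{-}IDS}$, the per-episode regret satisfies $\mathbb{E}_\ell[V^{\mathcal E}_{1,\pi^*}(s^\ell_1) - V^{\mathcal E}_{1,\pi^\ell_{s\text{-}IDS}}(s^\ell_1)] - \varepsilon \le \big(\tilde\Gamma\,\mathbb{I}^{\pi^\ell_{s\text{-}IDS}}_\ell(\tilde{\mathcal E}^*_\ell;\mathcal H_{\ell,H})\big)^{1/2}$. Summing over episodes and applying the data processing inequality, the Cauchy–Schwarz inequality, and the chain rule of mutual information,

$$\mathrm{BR}_L(\pi_{s\text{-}IDS}) \le \mathbb{E}\left[\sum_{\ell=1}^L \big(\tilde\Gamma\,\mathbb{I}^{\pi^\ell_{s\text{-}IDS}}_\ell(\zeta;\mathcal H_{\ell,H})\big)^{1/2}\right] + L\varepsilon \le \sqrt{\mathbb{E}[\tilde\Gamma]\, L\sum_{\ell=1}^L \mathbb{E}\big[\mathbb{I}^{\pi^\ell_{s\text{-}IDS}}_\ell(\zeta;\mathcal H_{\ell,H})\big]} + L\varepsilon \le \sqrt{\mathbb{E}[\tilde\Gamma]\,\mathbb{I}(\zeta;\mathcal D_{L+1})\,L} + L\varepsilon.$$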
In the following, we bound $\mathbb{E}[\tilde\Gamma]$ and $\mathbb{I}(\zeta;\mathcal D_{L+1})$ for tabular and linear MDPs separately.

5.2 Tabular MDPs

We first show the existence of the partition required in Lemma 5.1 for tabular MDPs and an upper bound on the covering number $K$.

Lemma 5.3. There exists a partition $\{\Theta^\varepsilon_k\}_{k=1}^K$ over $\Theta$ such that for any $k\in[K]$ and $\mathcal E_1, \mathcal E_2\in\Theta^\varepsilon_k$,

$$V^{\mathcal E_1}_{1,\pi^*_{\mathcal E_1}}(s_1) - V^{\mathcal E_2}_{1,\pi^*_{\mathcal E_1}}(s_1) \le \varepsilon,$$

and the log covering number satisfies $\log(K) \le SAH\log(4H^2/\varepsilon)$.

The proof is deferred to Lemma B.3. For tabular MDPs, the mutual information between $\zeta$ and the history can be bounded by

$$\mathbb{I}(\zeta;\mathcal D_{L+1}) \le \mathbb{H}(\zeta) \le \log(K) \le SAH\log(4H^2/\varepsilon),$$

where $\mathbb{H}(\cdot)$ is the Shannon entropy. Compared with Lemma 4.3, where the whole environment is learned, learning the surrogate environment saves a factor of $S$ in the bound on the mutual information.

Lemma 5.4. The surrogate information ratio for tabular MDPs is upper bounded by $\mathbb{E}[\tilde\Gamma] \le 2SAH^3$.

The proof is the same as that of Lemma 4.2 and is thus omitted. Combining Lemmas 5.3-5.4 yields an improved bound for tabular MDPs using surrogate-IDS.

Theorem 5.5 (Improved regret bound for tabular MDPs). By choosing $\varepsilon = 1/L$, the regret bound of surrogate-IDS for tabular MDPs satisfies

$$\mathrm{BR}_L(\pi_{s\text{-}IDS}) \le \sqrt{2S^2A^2H^4L\log(4HL)}.$$

For tabular MDPs, surrogate-IDS improves the regret bound of vanilla-IDS by a factor of $S$. However, it is still a factor of $\sqrt{SAH}$ away from the minimax lower bound. We conjecture that surrogate-IDS can achieve the optimal bound at the price of a lower-order term, but leave this for future work.

Remark 5.6. Although the existence of $\tilde{\mathcal E}^*_\ell$ is established using a constructive argument, finding $\tilde{\mathcal E}^*_\ell$ requires a grid search and is not computationally efficient.

5.3 Linear MDPs

We extend our analysis to linear MDPs, a fundamental model for studying the theoretical properties of linear function approximation in RL. All proofs are deferred to Appendix B.4-B.5.

Definition 5.7 (Linear MDPs [Yang and Wang, 2019, Jin et al., 2020]). Let $\phi:\mathcal S\times\mathcal A\to\mathbb{R}^d$ be a feature map which assigns to each state-action pair a $d$-dimensional feature vector, and assume $\|\phi(s,a)\|_2 \le 1$. An MDP is called a linear MDP if for any $h\in[H]$, there exist $d$ unknown (signed) measures $\psi^1_h,\dots,\psi^d_h$ over $\mathcal S$ such that for any $(s,a)\in\mathcal S\times\mathcal A$, we have

$$P_h(\cdot|s,a) = \langle\phi(s,a), \psi_h(\cdot)\rangle,$$

where $\psi_h = (\psi^1_h,\dots,\psi^d_h)$. Let $\Theta^{\mathrm{Lin}}$ denote the parameter space of linear MDPs and assume $\|\sum_{s'}\psi_h(s')\|_2 \le C_\psi$.

Note that the degrees of freedom of linear MDPs still depend on $S$, which implies that $\mathbb{I}(\mathcal E;\mathcal D_{L+1})$ may still scale with $S$. Therefore, under the regret decomposition in Theorem 4.1, we must learn a surrogate environment rather than the whole environment for linear MDPs. We first show the existence of a partition over linear MDPs whose log covering number depends only on the feature dimension $d$.

Lemma 5.8. There exists a partition $\{\Theta^\varepsilon_k\}_{k=1}^K$ over $\Theta^{\mathrm{Lin}}$ such that for any $k\in[K]$ and $\mathcal E_1,\mathcal E_2\in\Theta^\varepsilon_k$,

$$V^{\mathcal E_1}_{1,\pi^*_{\mathcal E_1}}(s_1) - V^{\mathcal E_2}_{1,\pi^*_{\mathcal E_1}}(s_1) \le \varepsilon,$$

and the log covering number satisfies $\log(K) \le Hd\log(H^2C_\psi/\varepsilon + 1)$.

For linear MDPs, the mutual information can be bounded by

$$\mathbb{I}(\zeta;\mathcal D_{L+1}) \le \mathbb{H}(\zeta) \le \log(K) \le Hd\log(H^2C_\psi/\varepsilon + 1).$$

Lemma 5.9. The surrogate information ratio for linear MDPs is upper bounded by $\mathbb{E}[\tilde\Gamma] \le 4H^3d$.

Theorem 5.10 (Regret bound for linear MDPs). By choosing $\varepsilon = 1/L$, the regret bound of surrogate-IDS for linear MDPs satisfies

$$\mathrm{BR}_L(\pi_{s\text{-}IDS}) \le \sqrt{4H^4d^2L\log(H^2C_\psi L + 1)} + 1.$$

This Bayesian bound improves the $O(d^{3/2}H^2\sqrt{L})$ frequentist regret of LSVI-UCB [Jin et al., 2020] by a factor of $\sqrt d$ and matches the existing minimax lower bound $\Omega(\sqrt{H^3d^2L})$ [Zhou et al., 2021] up to a factor of $H$. However, we emphasize that this is not an apples-to-apples comparison, since a frequentist regret bound is in general stronger than a Bayesian regret bound.
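As a sanity check, plugging Lemmas 5.8 and 5.9 into Theorem 5.2 with $\varepsilon = 1/L$ recovers Theorem 5.10 directly:

$$\mathrm{BR}_L(\pi_{s\text{-}IDS}) \le \sqrt{\mathbb{E}[\tilde\Gamma]\,\mathbb{I}(\zeta;\mathcal D_{L+1})\,L} + L\varepsilon \le \sqrt{4H^3d\cdot Hd\log(H^2C_\psi L + 1)\cdot L} + 1 = \sqrt{4H^4d^2L\log(H^2C_\psi L + 1)} + 1.$$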
5.4 Regret bounds for TS

As a direct application of our rate-distortion analysis, we provide Bayesian regret bounds for Thompson sampling.

Theorem 5.11. A generic regret bound for TS is

$$\mathrm{BR}_L(\pi_{TS}) \le \sqrt{\mathbb{E}[\tilde\Gamma]\,\mathbb{I}(\zeta;\mathcal D_{L+1})\,L} + L\varepsilon.$$

This implies that for tabular and linear MDPs, TS enjoys the same regret bounds as surrogate-IDS. Note that the computation of TS does not need to involve the surrogate environment $\tilde{\mathcal E}^*_\ell$, so once the posterior sampling oracle is available, computing the policy is efficient. However, when the worst-case information ratio cannot be optimally bounded by the information ratio of TS, IDS enjoys better regret bounds than TS, for example in bandits with graph feedback [Hao et al., 2022] and sparse linear bandits [Hao et al., 2021].

6 Conclusion

In this paper, we derive the first prior-free Bayesian regret bounds for information-directed RL under tabular and linear MDPs. Theoretically, it will be of great interest to see whether any version of IDS can achieve the $\Omega(\sqrt{SAH^3L})$ minimax lower bound for tabular MDPs.

Acknowledgements

We would like to thank Johannes Kirschner for helpful initial discussions.
1. What is the focus and contribution of the paper regarding information directed sampling in MDPs? 2. What are the strengths of the proposed approach, particularly in terms of its generality and efficiency? 3. What are the weaknesses of the paper, especially regarding its technical contribution and comparisons with other works? 4. Do you have any concerns or questions regarding the suboptimality of the regret and the computational efficiency of the refined bounds? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents general guarantees for information-directed sampling in MDPs. As it stood, prior work had only understood Thompson-sampling-inspired approaches in frequentist settings, or provided bounds for specific priors; this is the first work to analyze proper IDS for MDPs with no restrictions on the prior. Strengths And Weaknesses Strengths: the bounds in this paper apply to general priors; some of the information bounds based on the method of mixtures may be of independent interest; and a regularized variant of IDS can be implemented efficiently given access to a natural sampling oracle. In addition, the paper is generally well written and well explained, despite a couple of minor grammatical issues. The authors do a great job explaining what the essential ingredients of their proofs are. The refinements due to rate-distortion theory were also a nice addition. Weaknesses: I should preface this by saying that I am not an expert on Bayesian regret bounds; hence it is hard for me to gauge the technical contribution of this paper. However, it does seem that the techniques and arguments are rather standard, and I think it would be useful for the authors to explain not just the results derived in prior works, but to give a sense of how common (or unique) their techniques are in comparison to the rest of the Bayesian regret community. In addition, it seems that the bounds here do not match what is attainable in the (harder) frequentist setting. This makes me wonder: either (a) is the analysis loose, or (b) can one derive lower bounds to show that IDS (without modification) necessarily suffers this worse sample complexity? Even some numerical experiments demonstrating scaling with S would be illustrative here. Another weakness is that the sharper guarantees require computing an explicit cover, which is computationally prohibitive. I would have been more excited if the refined regret were attainable with computationally efficient algorithms. Questions Do the authors conjecture that the suboptimality of their regret is a limitation of the analysis, or of the algorithm? Do they have any analysis or experimental evidence to shed light on this? Moreover, have the authors thought about what a more computationally efficient algorithm which uses the MDP cover would look like? Limitations As noted, the regret bounds are suboptimal, and the refined bounds are not computationally efficient.
NIPS
1. What is the focus and contribution of the paper regarding Information-Directed Sampling methods in MDP settings? 2. What are the strengths of the proposed algorithm and analysis? 3. Do you have any concerns or issues with the discussion related to Γ ∗ in Sec. 3.1 and its proof? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors studied provably efficient Information-Directed Sampling (IDS) methods in the MDP setting. They first proposed vanilla-IDS and then derived a prior-free Bayesian regret bound for it. After that, for the sake of computational efficiency, they proposed another variant called regularized-IDS. Besides, they improved the regret bound by learning a surrogate environment. Beyond the tabular setting, they also extended their results to linear MDPs. Strengths And Weaknesses Strengths This paper makes an important contribution to understanding IDS methods in the MDP setting. The algorithm and analysis look novel and interesting. The paper writing looks good to me. Weakness I only have a small issue about the rigorousness of the discussion related to $\Gamma^*$ in Sec. 3.1 and the related proof. It seems to me that $\Gamma_\ell(\pi^\ell_{IDS})$ is not a constant across $\ell = 1, 2, \dots, L$, while $\Gamma^*$ is defined to be the worst-case information ratio and upper bounds $\Gamma_\ell(\pi^\ell_{IDS})$ for all $\ell\in[L]$. As a result, there might exist some $\bar\ell, \tilde\ell\in[L]$ such that $\Gamma^*$ is attained at $\bar\ell$, but at $\tilde\ell$ we have $\Gamma_{\tilde\ell}(\pi^{\tilde\ell}_{IDS}) < \Gamma^*$. Therefore, although we always have $\Gamma_\ell(\pi^\ell_{IDS}) \le \Gamma_\ell(\pi^\ell_{TS})$, it is possible that $\Gamma_\ell(\pi^\ell_{IDS}) \le \Gamma_\ell(\pi^\ell_{TS}) < \Gamma^*$ when $\ell = \tilde\ell$. As a result, I think the argument $\Gamma^* \le \Gamma_\ell(\pi^\ell_{TS})$ (Line 482 in the proof of Lem. 3.2) is not correct (but I guess one can recover the same regret upper bound without introducing $\Gamma^*$, and then there will be no such issue). If the authors can fix the issue I mentioned above, I would like to increase my score correspondingly. Questions Please check the Weakness section above. Limitations N.A.
NIPS
Title Learning Articulated Rigid Body Dynamics with Lagrangian Graph Neural Network Abstract Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to significantly outperform other models of physical systems. However, these models have, thus far, mostly been limited to simple systems such as pendulums and springs, or to a single rigid body such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph neural network (LGNN) that can learn the dynamics of articulated rigid bodies by exploiting their topology. We demonstrate the performance of LGNN by learning the dynamics of ropes, chains, and trusses with the bars modeled as rigid bodies. LGNN also exhibits generalizability: an LGNN trained on chains with a few segments can simulate chains with a large number of links and arbitrary link lengths. We also show that LGNN can simulate unseen hybrid systems including bars and chains, on which it has not been trained. Specifically, we show that LGNN can be used to model the dynamics of complex real-world structures such as the stability of tensegrity structures. Finally, we discuss the non-diagonal nature of the mass matrix and its role in generalization to complex systems.

1 Introduction and Related Works

Movements of a robotic arm, a rolling ball, or a falling chain can be characterized by rigid body motion [1, 2]. Understanding the dynamics of this motion is crucial in several applications including robotics, human-robot interaction, planning, and computer graphics [3, 1]. Traditionally, rigid body mechanics is studied in the framework of classical mechanics, which relies on either force-based or energy-based approaches [4]. Force-based approaches involve the computation of all the unknown forces from the equations of equilibrium and hence are cumbersome for large structures. Energy-based approaches present an elegant formalism which involves the computation of a scalar quantity representing the state of a system, namely the Lagrangian ($\mathcal L = T - V$), the difference between the kinetic ($T(q,\dot q)$) and potential ($V(q)$) energies, or the Hamiltonian ($\mathcal H = T + V$), which represents the total energy of the system. This scalar quantity can, in turn, be used to predict the dynamics of the system. However, the functional form governing this scalar quantity may not be known a priori in many cases [5]. Thus, learning the dynamics of rigid bodies directly from the trajectory can simplify and accelerate the modeling of these systems [5, 6, 7, 8].

The code is available at https://github.com/M3RG-IITD/rigid_body_dynamics_graph

36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Learning the dynamics of particles has received much attention recently using physics-informed approaches [9]. Among these, Lagrangian neural networks (LNNs) and Hamiltonian neural networks (HNNs) are two physics-informed neural networks with strong inductive biases that outperform other learning paradigms for dynamical systems [10, 11, 12, 8, 6, 13, 7, 14]. In this approach, a neural network is trained to learn the $\mathcal L$ (or $\mathcal H$) of a system based on its configuration $(q, \dot q)$. The $\mathcal L$ is then used along with the Euler-Lagrange (EL) equation to obtain the time evolution of the system. Note that the training of LNNs is performed by minimizing the error on the predicted trajectory with respect to the actual trajectory.
Thus, LNNs can effectively learn the Lagrangian directly from the trajectory of a multi-particle system [6, 13]. Most of the work on LNNs has focused on relatively simple particle-based systems such as springs and pendulums [15, 16, 6, 13, 7, 10, 17]. This approach models a rigid body, for instance a ball, as a particle and predicts the dynamics, thereby ignoring the additional rotational degrees of freedom that arise from the body's finite volume. Specifically, while a particle in 3D has three degrees of freedom (translational), a rigid body in 3D has six degrees of freedom (translational and rotational). Thus, the dynamics and energetics associated with these degrees of motion are lost by modeling a rigid body as a particle. To the best of the authors' knowledge, thus far, only one work has attempted to learn rigid body dynamics using LNNs and HNNs, where it was demonstrated that the dynamics of simple rigid bodies such as a gyroscope or a rotating rotor can be learned [13]. However, the LNNs used in this work, owing to their fully connected MLP architecture, are transductive in nature. An LNN trained on a double-pendulum system or a 3-spring system can be used only for the same system and does not generalize to a different system size such as a 3-pendulum or 5-spring system, respectively. In realistic situations the number of particles in a system can vary arbitrarily and, accordingly, a large number of trained models might be required to model these systems.

An alternative approach to modeling these systems is to use a graph neural network (GNN) [18, 19, 5, 15, 16], which, once trained, can generalize to arbitrary system sizes. GNNs have been widely used to model physical and atomic systems due to their inductive bias [20, 21, 22, 15, 16]. GNNs have also been used to model rigid bodies, mainly following two approaches, namely particle-based [19] and lumped-mass [22, 23] methods. In the first approach, a rigid body is discretized into a finite number of particles, and the motions of the individual particles are learned to predict the dynamics of the rigid body [19]. Note that this approach is philosophically similar to mesh-less methods such as smoothed-particle hydrodynamics (SPH) [24] or peridynamics (PD) [25], where the time evolution of a continuum body is simulated by discretizing the domain using particles. This approach [19], although useful, has several limitations: it does not (i) conserve physical quantities such as energy when simulated over a long duration, or (ii) generalize to a timestep of forward simulation different from the one on which it is trained. In the second approach, a rigid body is modeled as a lumped mass [22, 26], the dynamics of which is learned by treating this lumped mass as a particle. For instance, the dynamics of a chain is modeled by discretizing the chain into smaller segments and modeling each segment as a lumped mass. As mentioned earlier, this approach leads to the loss of the additional degrees of freedom associated with a rigid body.

Here, we present a Lagrangian graph neural network (LGNN) framework that can learn the dynamics of rigid bodies. Specifically, exploiting the topology of a physical system, we show that a rigid body can be modeled as a graph. Further, the Lagrangian of the graph structure can be learned directly by minimizing the loss on the predicted trajectory with respect to the actual trajectory of the system. The major contributions of the work are as follows.

• Topology-aware modeling of rigid bodies.
We present a graph-based model for articulated rigid bodies such as inextensible ropes, chains, and trusses. Further, we demonstrate using LGNN that the dynamics of these systems can be learned in the Lagrangian framework.

• Generalizability to arbitrary system sizes. We show that LGNN, once trained, can generalize to arbitrary system sizes.

• Generalizability to complex unseen topology. We demonstrate that LGNN can generalize to unseen topologies, that is, links with varying lengths, a combination of truss and chain structures, and different boundary conditions.

Altogether, we demonstrate that LGNN can be a strong framework for simulating the dynamics of articulated rigid bodies.

2 Dynamics of Rigid Bodies

The dynamics of a physical system can be represented as $\ddot q = F(q, \dot q, t)$, where $q, \dot q \in \mathbb{R}^D$ are functions of time $t$ for a system with $D$ degrees of freedom. The future states, or trajectory, of the system can be predicted by integrating these equations to obtain $q(t+1)$ and so on. While there are several physics-based methods for generating the dynamics of the system, such as d'Alembert's principle and the Newtonian, Lagrangian, or Hamiltonian approaches, all of these result in equivalent sets of equations [3]. The two broad paradigms for modeling the dynamics involve force- and energy-based approaches. Energy-based approaches form an elegant framework which relies on the computation of a single scalar quantity, for instance energy, that represents the state of the system; the dynamics of the system is, in turn, computed from this scalar quantity. Among the energy-based approaches, the Lagrangian formulation has been widely used to predict the dynamics of particles and rigid bodies by computing the Lagrangian $\mathcal L$ of the system. The standard form of Lagrange's equation for a system with holonomic constraints is

$$\frac{d}{dt}\frac{\partial\mathcal L}{\partial\dot q} - \frac{\partial\mathcal L}{\partial q} = 0,$$

and the Lagrangian is $\mathcal L(q,\dot q,t) = T(q,\dot q,t) - V(q,t)$, with $T(q,\dot q,t)$ and $V(q,t)$ representing the total kinetic energy of the system and the potential function from which generalized forces can be derived. Accordingly, the dynamics of the system can be obtained from the EL equations as

$$\ddot q_i = \left(\frac{\partial^2\mathcal L}{\partial\dot q_i^2}\right)^{-1}\left[\frac{\partial\mathcal L}{\partial q_i} - \frac{\partial^2\mathcal L}{\partial q_i\,\partial\dot q_i}\,\dot q_i\right].$$
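For readability, we note how this expression for $\ddot q_i$ follows from Lagrange's equation: expanding the total time derivative (assuming $\mathcal L$ has no explicit time dependence and treating each coordinate scalar-wise) gives

$$\frac{d}{dt}\frac{\partial\mathcal L}{\partial\dot q_i} = \frac{\partial^2\mathcal L}{\partial\dot q_i^2}\,\ddot q_i + \frac{\partial^2\mathcal L}{\partial q_i\,\partial\dot q_i}\,\dot q_i = \frac{\partial\mathcal L}{\partial q_i},$$

and solving for $\ddot q_i$ yields the formula above. In the general vector case, the same manipulation produces the mass matrix $M = \nabla_{\dot q}\nabla_{\dot q}\mathcal L$ that appears in Eq. 2 below.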
Modified Euler-Lagrange Equation. A modified version of the EL equation can be used in cases where some of the terms involved can be decoupled. This formulation allows the explicit incorporation of constraints (holonomic and Pfaffian) and of additional dissipative terms for friction or drag [3, 1]. In rigid body motion, Pfaffian constraints can be crucial in applications such as multi-fingered grasping, where the velocities of two or more fingers are constrained so that the combined geometry they form is able to catch or hold an object. A generic expression for such constraints that accounts for both the holonomic and Pfaffian cases is $A(q)\dot q = 0$, where $A(q)\in\mathbb{R}^{k\times D}$ represents $k$ velocity constraints. In addition, drag, friction, or other dissipative terms of a system can be expressed as additional forcing terms in the EL equation. It is worth noting that the EL equation is, by nature, energy conserving; hence, the additional dissipative terms are crucial for modeling realistic systems with friction and drag. If these terms are not included, the system will essentially simulate an energy-preserving trajectory, resulting in huge errors in the dynamics [17].

Considering the additional forces mentioned above, the modified EL equation can be written as

$$\frac{d}{dt}\nabla_{\dot q}\mathcal L - \nabla_q\mathcal L + A^T(q)\lambda - \Upsilon - F = 0, \qquad (1)$$

where $A^T$ forms a non-normalized basis for the constraint forces; $\lambda\in\mathbb{R}^k$, known as the Lagrange multipliers, gives the relative magnitudes of these constraint forces; $\Upsilon$ represents the non-conservative forces, such as friction or drag, which are not directly derivable from a potential; and $F$ represents any external forces acting on the system. This equation can be rearranged to obtain $\ddot q$ as

$$\ddot q = M^{-1}\left(-C\dot q + \Pi + \Upsilon - A^T(q)\lambda + F\right), \qquad (2)$$

where $M = \frac{\partial}{\partial\dot q}\big(\frac{\partial\mathcal L}{\partial\dot q}\big)$ represents the mass matrix, $C = \frac{\partial}{\partial q}\big(\frac{\partial\mathcal L}{\partial\dot q}\big)$ represents Coriolis-like forces, and $\Pi = \frac{\partial\mathcal L}{\partial q}$ represents the conservative forces derivable from a potential. Differentiating the constraint equation gives $A(q)\ddot q + \dot A(q)\dot q = 0$. Solving for $\lambda$ (see A.2) and substituting into Eq. 2, we obtain

$$\ddot q = M^{-1}\Big(\Pi - C\dot q + \Upsilon + F - A^T\big(AM^{-1}A^T\big)^{-1}\big[AM^{-1}(\Pi - C\dot q + \Upsilon + F) + \dot A\dot q\big]\Big). \qquad (3)$$

For a system subjected to these forces, the dynamics can be learned using an LNN by minimizing the loss between the predicted and observed trajectories, where the predicted acceleration $\hat{\ddot q}$ is obtained using Equation 3; a code sketch of this solve follows below. It is worth noting that in this equation, $M$, $C$, and $\Pi$ can be derived directly from $\mathcal L$. The constraints on a system are generally known, as they typically form part of the topology; note, however, that there are some recent works that focus on learning the constraints as well [8].
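To make Eq. 3 concrete, the following is a minimal sketch of the constrained acceleration solve, assuming the quantities $M$, $C$, $\Pi$, $\Upsilon$, $F$, $A$, and $\dot A$ have already been assembled as arrays; the function and variable names are our own.

```python
import numpy as np

def constrained_acceleration(M, C, Pi, Upsilon, F, A, A_dot, q_dot):
    """Solve Eq. 3: accelerations of a constrained system, given the
    mass matrix M (D x D), Coriolis-like matrix C (D x D), conservative
    force Pi (D,), dissipative force Upsilon (D,), external force F (D,),
    constraint matrix A (k x D) and its time derivative A_dot (k x D)."""
    rhs = Pi - C @ q_dot + Upsilon + F           # unconstrained force terms
    Minv_rhs = np.linalg.solve(M, rhs)
    # Lagrange multipliers enforcing A q_ddot + A_dot q_dot = 0.
    lam = np.linalg.solve(A @ np.linalg.solve(M, A.T),
                          A @ Minv_rhs + A_dot @ q_dot)
    return np.linalg.solve(M, rhs - A.T @ lam)
```

Using `np.linalg.solve` rather than explicitly forming $M^{-1}$ keeps the solve numerically stable when the (generally non-diagonal) mass matrix is poorly conditioned.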
3 Lagrangian Mechanics for Articulated Rigid Bodies

In the case of particle systems such as springs or pendulums, the approach described in Sec. 2 can be used directly in conjunction with an LNN to learn the dynamics. In this case, the mass matrix $M(q)$ remains constant, with only diagonal entries $m_{ii}$ in Cartesian coordinates. Inducing this as prior knowledge, wherein the masses are parameterized as a diagonal matrix, has been shown to simplify the learning process [13]. However, in the case of an articulated rigid body, the mass matrix is non-diagonal in Cartesian coordinates. Further, the kinetic energy term $T$ becomes a function of both position and velocity; in other words, the kinetic energy also becomes a function of the topology. This makes learning the dynamics a complex problem, especially in real-world structures such as trusses or tensegrities, which are combinations of bars, ropes, and chains. To this extent, we briefly review the mechanics of a falling rope or chain as an example. Note that simple rigid bodies such as a gyroscope or a rotating rotor have already been studied using LNNs [13]. Of special interest to us are articulated rigid bodies that can be arbitrarily large, such as chains, ropes, or trusses, which can be divided into smaller constituent members; extending LNNs to such large structures is generally considered a challenging problem [17].

Traditionally, the mechanics of chains or ropes is modeled using discrete models [2]. Figure 1 shows a discrete model of a rope of mass $M$ and length $L$. The rope is discretized into $n$ cylindrical rods or segments, each having mass $m_i = M/n$ and length $l_i = L/n$. These segments are considered rigid, with a finite, uniform cross-sectional area and volume. In order to replicate the realistic dynamics of a rope, $l_i$ should be significantly smaller than $L$. Note that in the case of a chain or truss, such artificial discretization is not required, and the bars associated with each segment can be directly considered as rigid bodies.

To formulate $\mathcal L$, we consider generalized coordinates in which the orientation of each link is represented by $\phi_i = \tan^{-1}\!\big(\frac{y_i - y_{i-1}}{x_i - x_{i-1}}\big)$. Placing the origin at the beginning of the first segment (see Figure 1), the center of mass of the $i$th segment, $(x^{cm}_i, y^{cm}_i)$, can be written in terms of the generalized coordinates as

$$x^{cm}_i = \sum_{j=1}^{i-1} l_j\cos\phi_j + \tfrac12 l_i\cos\phi_i, \qquad y^{cm}_i = \sum_{j=1}^{i-1} l_j\sin\phi_j + \tfrac12 l_i\sin\phi_i. \qquad (4)$$

Accordingly, the kinetic energy of the system is given by [2]

$$T = \frac12\sum_{i=1}^n \left[m_i\big(\dot x_{i,cm}^2 + \dot y_{i,cm}^2\big) + I_i\dot\phi_i^2\right], \qquad (5)$$

where $I_i = \frac{1}{12}m_il_i^2$ represents the moment of inertia of rigid segment $i$. Similarly, the potential energy of the system can be expressed as

$$V = \sum_{i=1}^n m_i g\, y^{cm}_i, \qquad (6)$$

where $g$ represents the acceleration due to gravity. Finally, the Lagrangian of the system is obtained as $\mathcal L = T - V$, which can be substituted into the EL equation to obtain the dynamics of the rigid body; a code sketch of Eqs. 4-6 is given below.
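As a concrete reference for Eqs. 4-6, the following minimal sketch (our own code, not from the paper's repository) computes the Lagrangian of the discretized rope from the angles $\phi$ and angular velocities $\dot\phi$:

```python
import numpy as np

def rope_lagrangian(phi, phi_dot, m, l, g=9.81):
    """Lagrangian L = T - V of an n-segment rope (Eqs. 4-6).
    phi, phi_dot: angles and angular velocities, shape (n,);
    m, l: segment masses and lengths, shape (n,)."""
    # Center-of-mass heights (Eq. 4): sum of preceding full links plus a half link.
    cum_y = np.concatenate(([0.0], np.cumsum(l * np.sin(phi))))[:-1]
    y_cm = cum_y + 0.5 * l * np.sin(phi)
    # Center-of-mass velocities by the chain rule applied to Eq. 4.
    vx_cm = np.concatenate(([0.0], np.cumsum(-l * np.sin(phi) * phi_dot)))[:-1] \
            - 0.5 * l * np.sin(phi) * phi_dot
    vy_cm = np.concatenate(([0.0], np.cumsum(l * np.cos(phi) * phi_dot)))[:-1] \
            + 0.5 * l * np.cos(phi) * phi_dot
    I = m * l ** 2 / 12.0                        # moment of inertia of a rod
    T = 0.5 * np.sum(m * (vx_cm ** 2 + vy_cm ** 2) + I * phi_dot ** 2)  # Eq. 5
    V = np.sum(m * g * y_cm)                     # Eq. 6
    return T - V
```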
Pre-processing. In the pre-processing layer, we construct a dense vector representation for each node $u_i \in \mathcal{U}$ and edge $e_{ij} \in \mathcal{E}$ using MLPs (multi-layer perceptrons). The exact operations for the potential energy are given in Eqs. 7–8; for the kinetic energy, we input $\dot q_i$ in Eq. 7 instead of $q_i$, and $\omega_{ij}$ in Eq. 8 instead of $\Delta q_{ij}$:

$$h_i^0 = \text{squareplus}(\text{MLP}(q_i)) \qquad (7)$$

$$h_{ij}^0 = \text{squareplus}(\text{MLP}(\text{one-hot}(t_{ij}), \Delta q_{ij})) \qquad (8)$$

where squareplus is a smooth activation function.

Message passing. To infuse structural information into the edge and node embeddings, we perform $L$ layers of message passing, wherein the embedding in each layer $l \in \{1, \ldots, L\}$ is computed as follows:

$$h_{ij}^{l+1} = \text{squareplus}\left(\text{MLP}\left(h_{ij}^l + W_E^l \cdot \left(h_i^l \,\|\, h_j^l\right)\right)\right) \qquad (9)$$

Here, $W_E^l$ is a layer-specific learnable weight matrix and $\|$ denotes the concatenation operation. The node embeddings in a given layer $l$ are learned as follows:

$$h_i^{l+1} = \text{squareplus}\left(\text{MLP}\left(h_i^l + \sum_{j \in \mathcal{N}_i} W_U^l \cdot h_{ij}^l\right)\right) \qquad (10)$$

Here, $\mathcal{N}_i = \{u_j \mid (u_i, u_j) \in \mathcal{E}\}$ indexes the edges incident on node $u_i$. Similar to $W_E^l$, $W_U^l$ is a layer-specific learnable weight matrix, which performs a linear transformation on the embedding of each incident edge. Following $L$ layers of message passing, the final node and edge representations in the $L$th layer are denoted by $z_i = h_i^L$ and $z_{ij} = h_{ij}^L$, respectively.

Potential and kinetic energy prediction. The predicted potential energy of each edge (rigid body) is computed by passing its final-layer embedding through an MLP, i.e., $v_{ij} = \text{MLP}(z_{ij})$. The predicted potential energy of the rigid-body system is therefore the sum of the individual energies, i.e., $V = \sum_{e_{ij} \in \mathcal{E}} v_{ij}$. The computation for the kinetic energy is identical, except that it occurs in the other GNN, with parameters optimized for the kinetic energy.

Loss function. The predicted Lagrangian is simply the difference between the predicted kinetic and potential energies. Using the Euler–Lagrange equations, we obtain the predicted acceleration $\hat{\ddot q}_i(t)$ for each node $u_i$. The ground-truth acceleration is computed directly from the ground-truth trajectory using the Verlet algorithm:

$$\ddot q_i(t) = \frac{1}{(\Delta t)^2}\left[q_i(t + \Delta t) + q_i(t - \Delta t) - 2q_i(t)\right] \qquad (11)$$

The parameters of the GNNs are trained to minimize the RMSE loss over the entire trajectory $T$:

$$\mathcal{L} = \frac{1}{|\mathcal{U}|}\sum_{u_i \in \mathcal{U}}\sum_{t=2}^{|T|}\left(\hat{\ddot q}_i(t) - \ddot q_i(t)\right)^2 \qquad (12)$$

Since the integration of the equations of motion for the predicted trajectory is performed using the same algorithm, i.e., $q(t + \Delta t) = 2q(t) - q(t - \Delta t) + \ddot q\,(\Delta t)^2$, this method is equivalent to training on trajectories/positions.
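The following is a minimal JAX sketch of one message-passing layer (Eqs. 9–10) together with the squareplus activation. The parameter container, the one-hidden-layer `mlp` helper, and the scatter-add aggregation over both endpoints of each undirected edge are our own assumptions about how the update can be realized; they are not taken from the released code.

```python
import jax.numpy as jnp

def squareplus(x):
    # Smooth, twice-differentiable ReLU-like activation: (x + sqrt(x^2 + 4)) / 2.
    return 0.5 * (x + jnp.sqrt(x * x + 4.0))

def mlp(params, x):
    # One-hidden-layer MLP, matching the design described in Sec. 4.1.
    (W1, b1), (W2, b2) = params
    return squareplus(x @ W1 + b1) @ W2 + b2

def message_passing_layer(params, h_nodes, h_edges, senders, receivers):
    """One layer of Eqs. 9-10. h_nodes: (N, d); h_edges: (E, d)."""
    # Eq. 9: edge update from its own embedding plus a linear map W_E of the
    # concatenated endpoint embeddings (h_i || h_j).
    endpoints = jnp.concatenate([h_nodes[senders], h_nodes[receivers]], axis=-1)
    h_edges_new = squareplus(mlp(params["edge_mlp"],
                                 h_edges + endpoints @ params["W_E"]))

    # Eq. 10: node update aggregates W_U . h_ij over incident edges; for an
    # undirected graph each edge contributes to both of its endpoints.
    msgs = h_edges @ params["W_U"]
    agg = jnp.zeros_like(h_nodes).at[senders].add(msgs).at[receivers].add(msgs)
    h_nodes_new = squareplus(mlp(params["node_mlp"], h_nodes + agg))
    return h_nodes_new, h_edges_new
```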
4 Empirical Evaluation

In this section, we evaluate the ability of LGNN to learn rigid body dynamics. In addition, we evaluate the ability of LGNN to generalize to larger unseen system sizes, complex topologies, and realistic structures such as tensegrities.

4.1 Experimental setup

• Simulation environment. All training and forward simulations are carried out in the JAX environment [21]. The graph architecture is implemented using the jraph package [27]. All code related to dataset generation and training is available at https://github.com/M3RGIITD/rigid_body_dynamics_graph. Software packages: numpy-1.20.3, jax-0.2.24, jax-md-0.1.20, jaxlib-0.1.73, jraph-0.0.1.dev0. Hardware: 16 GiB system memory; Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz.

• Baselines. As outlined earlier, there are very few works on rigid body simulation using graph-based approaches where the graph models the topology of the rigid body. To compare the performance of LGNN, we employ three baselines, namely, (i) a graph network simulator (GNS), (ii) a Lagrangian graph network (LGN), and (iii) a constrained Lagrangian neural network (CLNN). GNS employs a full graph network architecture [5, 12, 19] to predict the updates to node positions and velocities based on the present positions and velocities. GNS has been shown to be a versatile model with the capability to simulate a wide range of physical systems [19]. LGN and CLNN employ exactly the same equations as LGNN for computing the acceleration and trajectory, and hence have the same inductive biases as LGNN in terms of training and inference. However, while LGN employs a full graph network, CLNN employs a feed-forward multilayer perceptron. Details of the architectures and hyperparameters of the baselines are provided in Appendix A.5 and Appendix A.6, respectively.

• Datasets and systems. To evaluate the performance of LGNN, we selected n-chain/rope systems with n = (4, 8, 16). All graph-based models are trained only on the 4-segment chain system and then evaluated on the other system sizes. To evaluate the zero-shot generalizability of LGNN to large unseen systems, we simulate 8- and 16-segment chain systems. To push the limits of LGNN, we also evaluate the model trained on the 4-segment chain on a 100-link system and on complex topologies involving truss members (long rigid members) and chains (short rigid members) with more than 40 segments (see Figure 3). The mass $m_i$ and moment of inertia $I_i$ are kept the same for all segments irrespective of their length. To evaluate the generalizability to realistic systems, we also evaluate the performance on a 4-link system with differing link properties and with an external drag. Details of the experimental systems are given in Appendix A.1, and the detailed data-generation procedure is given in Appendix A.4.

• Evaluation metric. Following [13], we evaluate performance by computing the relative error in (1) the trajectory, known as the rollout error, given by

$$RE(t) = \frac{\|\hat q(t) - q(t)\|_2}{\|\hat q(t)\|_2 + \|q(t)\|_2}$$

and (2) the energy violation error, given by $\frac{\|\hat H - H\|_2}{\|\hat H\|_2 + \|H\|_2}$. In addition, we compute the geometric mean of the rollout and energy errors to compare the performance of different models [13]. Note that all variables with a hat, for example $\hat x$, represent values predicted by the trained model, and variables without a hat, e.g., $x$, represent the ground truth.

• Model architecture and training setup. For the graph architectures, namely LGNN and GNS, all neural networks are modeled as one-hidden-layer MLPs with varying numbers of hidden units. For all MLPs, a squareplus activation function is used due to its double differentiability. In contrast to earlier approaches, training is not performed on whole trajectories; rather, it is performed on 10,000 data points sampled from 100 trajectories for all models. This dataset is divided randomly in a 75:25 ratio into training and validation sets. Model performance is evaluated on a forward trajectory of 1 s, a task the model was not explicitly trained for. Note that this trajectory is ∼2–3 orders of magnitude longer than the training trajectories from which the training data were sampled. The dynamics of an n-body system is known to be chaotic for n ≥ 2; hence, all results are averaged over trajectories generated from 100 different initial conditions. Detailed model architectures for each model and the hyperparameters used in training are provided in Appendices A.5 and A.6, respectively.
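As a reference for how the rollout and the reported metrics can be computed, the sketch below implements the position-Verlet update from Eq. 11 and the relative rollout and energy-violation errors defined above. The function names and array shapes are our own assumptions for illustration.

```python
import jax.numpy as jnp

def verlet_step(q, q_prev, qddot, dt):
    # Position-Verlet update used for both ground truth and forward rollout:
    # q(t + dt) = 2 q(t) - q(t - dt) + qddot * dt^2
    return 2.0 * q - q_prev + qddot * dt ** 2

def rollout_error(q_hat, q):
    # RE(t) = ||qhat(t) - q(t)||_2 / (||qhat(t)||_2 + ||q(t)||_2), per time step;
    # q_hat, q: (T, dof) flattened predicted and ground-truth trajectories.
    num = jnp.linalg.norm(q_hat - q, axis=-1)
    return num / (jnp.linalg.norm(q_hat, axis=-1) + jnp.linalg.norm(q, axis=-1))

def energy_error(H_hat, H):
    # The same relative form, applied to the total energy H(t) along the trajectory.
    return jnp.abs(H_hat - H) / (jnp.abs(H_hat) + jnp.abs(H))

def geometric_mean_error(re, ee):
    # Scalar summary used to compare models: sqrt of the product of the
    # time-averaged rollout and energy errors.
    return jnp.sqrt(jnp.mean(re) * jnp.mean(ee))
```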
4.2 Comparison with baselines

Model performance. To compare the performance of LGNN with the baselines GNS, LGN [12, 6], and CLNN [13], we evaluate the evolution of the energy violation and rollout errors. It is worth noting that GNS and LGN have been demonstrated only on particle-based systems and not on rigid bodies. Hence, to make a fair comparison, we give GNS and LGN the same node and edge input features as LGNN during training. All models are trained on a 4-link system and evaluated on all other systems. In the case of CLNN, due to its fully connected architecture, the model is not inductive in nature; hence, it is trained and tested only on the same system, i.e., the 4-link system. Detailed architectures of these models are provided in Appendix A.5. Figure 4 shows the energy and rollout errors for LGNN, GNS, LGN, and CLNN. We observe that GNS, LGN, and CLNN have larger errors than LGNN in both energy and rollout error (Figure 4), establishing the superiority of LGNN. To test the ability of LGNN to learn more complex systems, we consider two additional experiments: two similar 4-link systems, one with varying masses and moments of inertia and the other subjected to a linear drag, are evaluated in Appendix A.7. Figures 8 and 14 show that LGNN is able to infer the dynamics of both these systems.

Generalizability to different system sizes. Next, we analyze the performance of LGNN, trained on the 4-link system, on 8- and 16-link systems. We observe that LGNN exhibits performance comparable to that on the 4-segment system, in terms of both energy violation and rollout error, on the unseen 8- and 16-segment systems. In contrast, GNS exhibits increased energy violation and rollout errors, although the error of LGN remains comparable across systems. This suggests that the inductive bias in terms of the EL equations prevents the accumulation of error and allows improved generalization. However, the error of LGN is still orders of magnitude higher than that of LGNN, suggesting that the architecture employed in LGNN leads to improved learning of the system dynamics. This confirms that LGNN can generalize to larger unseen system sizes when trained on a significantly smaller system. Note that plots for CLNN are not shown for the 8- and 16-link systems, as its architecture cannot generalize to larger system sizes. Finally, to push the limits, we infer the dynamics of a 100-link chain (see Fig. 15). We observe that the LGNN trained on the 4-link system scales to the 100-link chain with comparable errors, confirming its ability to model large-scale structures. The trajectories of the ground-truth and trained models for some of these systems are provided as videos in the supplementary material (see Appendix A.3 for details).

Generalizability to systems with different edge properties and external drag. Although the framework presented here is generic, the results so far were limited to systems with identical edge properties; further, dissipative forces such as drag were not considered. To evaluate the ability of the model to incorporate these effects, we consider a 4-link system with differing edge properties (see Appendix A.7) and a system with drag.
We observe that LGNN can model systems with varying link properties and drag with comparable errors (see Figures 8 and 14). These results confirm that the LGNN framework can be used for realistic systems with arbitrary link properties and external dissipative forces.

4.3 Zero-shot generalizability

In conventional LNNs employing feed-forward MLPs, the training and test systems have the same number of particles and degrees of freedom. In other words, an LNN trained on an n-particle system cannot be used to perform inference on an m-particle system. In contrast, we show here that an LGNN trained on a small 4-link system can be used to perform forward simulations on other unseen complex systems, such as a 100-link system and tensegrity structures. This ability to infer on unseen system sizes and topologies is referred to as zero-shot generalizability. To analyze the zero-shot generalizability of the trained LGNN to complex real-world geometries and structures, we evaluate its ability to model the dynamics of tensegrity and lattice-like structures (see Fig. 3). Note that tensegrity structures are truss-like structures comprising both tension and compression members; the topology of a tensegrity structure is designed so that the compression members are always bars and the tension members are always ropes. Here, we analyze the ability of LGNN to model the equilibrium dynamics of the two complex tensegrity structures and the lattice-like structure shown in Figure 3. To this extent, we use the LGNN trained on the 4-segment structure: we convert each rigid-body structure to an equivalent graph and use the trained LGNN to predict the dynamics of the structure when released from its original configuration under gravity. Figure 5 shows the energy error and rollout error for both complex structures and the lattice-like structure shown in Figure 3. We note that LGNN is able to generalize to complex structures with varying bar lengths and topologies with high accuracy. Specifically, the energy violation and rollout errors exhibit very low values for LGNN (∼ 10⁻⁴) and saturate after a few initial time steps, suggesting equilibrium dynamics. In contrast, we observe that the error of GNS is very high and continues to increase until it reaches 1, the maximum value it can take. This confirms the superior ability of LGNN to generalize to arbitrary topologies, boundary conditions, and bar lengths after training on a simple 4-segment chain with constant-length segments. A visualization of the dynamics of system T1, as predicted by LGNN together with the ground truth, is shown in Fig. 6. We observe that the deformed shapes predicted by LGNN are in excellent agreement with the ground truth. Note that since the initial configuration for the forward simulation is fixed, it is not possible to generate error bars for the trajectory.

4.4 Nature of the learned mass matrix

Finally, we investigate the nature of the mass matrix of LGNN for different systems. Note that in earlier approaches, the mass matrix was either learned directly for a given system based on the EL equations [6], assumed to be diagonal in Cartesian coordinates [13], or the functional form of the kinetic energy was assumed [7]. In the present approach, we make no assumptions about the nature of the mass matrix. In fact, for a rigid body, the mass matrix need not be diagonal and depends on the actual topology of the structure. This raises an interesting question about the nature of the mass matrix learned by LGNN and how it generalizes to arbitrary topologies. To investigate this, we plot the mass matrix of LGNN in Figure 7. Note that the mass matrix is computed directly from the Lagrangian as $M = \frac{\partial^2 L}{\partial \dot q^2}$, where $L$ is obtained from the LGNN. First, we analyze the mass matrix of the 16-segment structure. We observe that the mass matrix is banded, with a penta-diagonal band, as expected for a chain structure. Next, we analyze the mass matrix for the complex structure T1. Interestingly, we observe that the learned mass matrix is non-diagonal and congruent with the complex topology of the structure (see Figure 7). This confirms that the mass matrix of LGNN is learned on the fly during the forward simulation, which provides LGNN with the versatility to simulate complex structures.
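Concretely, the learned mass matrix can be read off the trained Lagrangian with a single Hessian evaluation. The sketch below assumes the state has been flattened into one coordinate vector; the helper name is ours.

```python
import jax

def mass_matrix(lagrangian, q, qdot):
    # M = d^2 L / d qdot^2, evaluated at the current state. No structure is
    # imposed on M, so any bandedness (e.g., the penta-diagonal pattern of a
    # chain) must emerge from training. q, qdot: flattened (dof,) vectors.
    return jax.hessian(lagrangian, argnums=1)(q, qdot)
```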
5 Conclusions

In this work, we present an LGNN-based framework that can be used to simulate the dynamics of articulated rigid bodies. Specifically, we present a graph architecture that decouples the kinetic and potential energies, from which the Lagrangian of the system is computed; applying the EL equations to this Lagrangian yields the dynamics. We show that LGNN can learn the dynamics from a small 4-segment chain and then generalize to larger system sizes. We also demonstrate the zero-shot generalizability of LGNN to arbitrary topologies, including tensegrity structures. Interestingly, we show that LGNN provides insight into the learned mass matrix, which can exhibit non-trivial structure in complex systems. This suggests the ability of LGNN to learn and infer the dynamics of complex real-life structures directly from observables such as their trajectories.

Limitations and future work. From the mechanics perspective, LGNN assumes knowledge of the constraints; learning constraints directly from the trajectory would be useful. Similarly, extending LGNN to model contacts, collisions, and deformations would allow more comprehensive learning of realistic systems. From the modeling perspective, in our message-passing LGNN, all messages are given equal importance; attention heads in message-passing neural networks have been shown to improve performance remarkably in several domains [28]. We plan to study the impact of attention in LGNN in future work.

Acknowledgments and Disclosure of Funding

The authors thank the IIT Delhi HPC facility for providing the computational and storage resources.
1. What is the focus and contribution of the paper regarding predicting trajectories of rigid bodies?
2. What are the strengths of the proposed method, particularly its originality and technical soundness?
3. What are the weaknesses and limitations of the approach, such as assumptions about mass and moment of inertia, and the need for known constraints?
4. Do you have any questions or suggestions regarding notation, terminology, or minor issues in the paper?
5. How does the learned mass matrix compare to the ground truth mass matrix?
6. Are there any possible typos or errors in the paper that should be addressed?
Summary Of The Paper

The authors introduce a Lagrangian graph neural network to predict the trajectory of a rigid body. The rigid body is represented as a graph where edges represent rigid bodies – e.g., links in a chain – and nodes represent connections between those rigid bodies. The position and velocity of each node is assumed to be given. From the input graph, the authors construct two graph neural networks which are trained simultaneously. One learns to predict the kinetic energy of the rigid body and the other learns to predict the potential energy of the rigid body, and the Lagrangian is formed by computing the difference of these quantities. The networks are trained by minimizing the difference between the predicted acceleration (determined by the EL equations) and the ground-truth acceleration of the system. Against previous work, the proposed approach shows strong predictive performance and an ability to generalize to unseen structures. The mass and moment of inertia of all members in the graph are the same irrespective of their length.

Strengths And Weaknesses

Originality: The method appears to be new. The related work is well organized and the proposed approach is adequately situated in the context of existing methods.

Quality: The submission appears to be technically sound, with the claims well supported by the empirical analysis.

Clarity: The paper is well written and organized.

Significance: The paper addresses the issue of predicting physically plausible trajectories of a rigid body system, a challenging problem which has the potential to impact scientific discovery. However, there are several limitations of the approach: the mass and moment of inertia of all members in the graph are the same, and the constraints of the system are assumed to be known.

Questions

Questions:
- [55] Should it be \psi = sin^(-1)((y_i - y_{i-1})/(x_i - x_{i-1}))?
- Eqn 5: it looks like the kinetic energy is mass times position; shouldn't it be mass times velocity?
- [208] What is || in Eqn 9?
- [309] How does the learned mass matrix compare to the gt mass matrix?

Comments:
- [47] Possibly relevant citations [1].
- The notation in Eq. 5 is a bit challenging (e.g., x_i^cm2).
- Consider using \mathcal{l}(\ddot{q}, \hat{\ddot{q}}) for the loss, since \mathcal{L} is used for the Lagrangian.

Possible typos:
- [37] abbreviation EL is used before it is defined
- [143] "This is because…"
- [152] "significantly smaller that" → "significantly smaller than"
- [184, 185] node → nodes
- [211] N_i = {u_j | (u_i, u_j) \in E}
- [215] "denoted as" → "denoted by"

[1] Duong, Thai, and Nikolay Atanasov. "Hamiltonian-based Neural ODE Networks on the SE(3) Manifold For Dynamics Learning and Control." Robotics: Science and Systems (RSS). 2021.

Limitations

The authors note that the proposed approach assumes knowledge of system constraints.